Search Results: "mt"

7 November 2023

Jonathan Dowland: gitsigns (useful neovim plugins)

gitsigns is a Neovim plugin which adds a wonderfully subtle colour annotation in the left-hand gutter to reflect changes in the buffer since the last git commit¹. My long-term habit with Vim and Git is to frequently background Vim (^Z) to invoke the git command directly and then foreground Vim again (fg). Over the last few years I've been trying more and more to call vim-fugitive from within Vim instead. (I still do rebases and most merges the old-fashioned way). For the most part Gitsigns is a nice passive addition to that, but it can also do a lot of the useful things that Fugitive does. Previewing changed hunks in a little floating window, in particular when resolving an awkward merge conflict, is very handy.
An example screenshot of the Gitsigns plugin
The above picture shows it in action. I've changed two lines and added another three in the top-left buffer. Gitsigns tries to choose colours from the currently active colour scheme (in this case, tender).

  1. by default Gitsigns shows changes since the last commit, as git diff does, but you can easily switch out the base from which it compares.
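     For example (a sketch; this assumes the plugin's default command name), switching the comparison base to another revision is a single command, :Gitsigns change_base main, after which the gutter signs reflect changes relative to main rather than the last commit.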

26 October 2023

Dima Kogan: Talking to ROS from outside a LAN

The problem
This is about ROS version 1. Version 2 is different, and maybe they fixed stuff. But I kinda doubt it since this thing is heinous in a million ways. Alright, so let's say we have some machines in a LAN doing ROS stuff, and we have another machine outside the LAN that wants to listen in (to get a realtime visualization, say). This is an extremely common scenario, but they created enough hoops to make this not work. Let's say we have 3 computers:
  • router: the bridge between the two networks. This has two NICs. The inner IP is 10.0.1.1 and the outer IP is 12.34.56.78
  • inner: a machine in the LAN that's doing ROS stuff. IP 10.0.1.99
  • outer: a machine outside that LAN that wants to listen in. IP 12.34.56.99
Let's say the router is doing ROS stuff. It's running the ROS master and some nodes like this:
ROS_IP=10.0.1.1 roslaunch whatever
If you omit the ROS_IP it'll pick the machine's hostname (router), which may or may not work, depending on how the DNS is set up. Here we set it to 10.0.1.1 to make it possible for the inner machine to communicate (we'll see why in a bit). An aside: ROS should use the IP by default instead of the name because the IP will work even if the DNS isn't set up. If there are multiple extant IPs, it should throw an error. But all that would be way too user-friendly. OK. So we have a ROS master on 10.0.1.1 on the default port: 11311. The inner machine can rostopic echo and all that. Great. What if I try to listen in from outer? I say
ROS_MASTER_URI=http://12.34.56.78:11311 rostopic list
This connects to the router on that port, and it works well: I get the list of available topics. This works here because the machine running the ROS master is the router itself; if inner were running the ROS master, we'd need to forward port 11311. In any case, this works and we understand it. So clearly we can talk to the ROS master. Right? Wrong! Let's actually listen in on a specific topic on outer:
ROS_MASTER_URI=http://12.34.56.78:11311 rostopic echo /some/topic
This does not work. No errors are reported. It just sits there, which looks like no data is coming in on that topic. But this is a lie: it's actually broken.

The diagnosis
So this is our problem. It's a very common use case, and there are plenty of internet people asking about it, with no specific solutions. I debugged it, and the details are here. To figure out what's going on, I made a syscall log on a machine inside the LAN, where a simple rostopic echo does work:
sysdig -A proc.name=rostopic and fd.type contains ipv -s 2000
This shows us all the communication between inner (running rostopic) and the server. It's really chatty. It's all TCP. There are multiple connections to the router on port 11311. It also starts up multiple TCP servers on the client that listen for connections; these would likely be broken if we were running the client on outer and a machine inside the LAN tried to talk to them, but thankfully, in my limited testing, nothing actually did. The conversations on port 11311 are really long, but here's the punchline: inner tells the router:
POST /RPC2 HTTP/1.1                                                                                                                 
Host: 10.0.1.1:11311                                                                                                          
Accept-Encoding: gzip                                                                                                               
Content-Type: text/xml                                                                                                              
User-Agent: Python-xmlrpc/3.11                                                                                                      
Content-Length: 390                                                                                                                 
<?xml version='1.0'?>
<methodCall>
<methodName>registerSubscriber</methodName>
<params>
<param>
<value><string>/rostopic_2447878_1698362157834</string></value>
</param>
<param>
<value><string>/some/topic</string></value>
</param>
<param>
<value><string>*</string></value>
</param>
<param>
<value><string>http://inner:38229/</string></value>
</param>
</params>
</methodCall>
Yes. It's laughably chatty. Then the router replies:
HTTP/1.1 200 OK
Server: BaseHTTP/0.6 Python/3.8.10
Date: Thu, 26 Oct 2023 23:15:28 GMT
Content-type: text/xml
Content-length: 342
<?xml version='1.0'?>
<methodResponse>
<params>
<param>
<value><array><data>
<value><int>1</int></value>
<value><string>Subscribed to [/some/topic]</string></value>
<value><array><data>
<value><string>http://10.0.1.1:45517/</string></value>
</data></array></value>
</data></array></value>
</param>
</params>
</methodResponse>
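As an aside, you can poke at this registration step without rostopic at all, by POSTing the same kind of XML-RPC call with curl. A sketch (the caller name /curl_probe and its callback URI are made-up placeholders; note that this registers a throwaway subscriber with the master):
curl -s -H 'Content-Type: text/xml' --data-binary \
"<?xml version='1.0'?>
<methodCall>
<methodName>registerSubscriber</methodName>
<params>
<param><value><string>/curl_probe</string></value></param>
<param><value><string>/some/topic</string></value></param>
<param><value><string>*</string></value></param>
<param><value><string>http://outer:12345/</string></value></param>
</params>
</methodCall>" \
http://12.34.56.78:11311/RPC2
The publisher URI in the reply (http://10.0.1.1:45517/ above) is the address the subscriber is then expected to connect to.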
Then this sequence of system calls happens in the rostopic process (an excerpt from the sysdig log):
> connect fd=10(<4>) addr=10.0.1.1:45517
< connect res=-115(EINPROGRESS) tuple=10.0.1.99:47428->10.0.1.1:45517 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517)
< getsockopt res=0 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=0 optlen=4
So the inner client makes an outgoing TCP connection on the address given to it by the ROS master above: 10.0.1.1:45517. This IP is only accessible from within the LAN, which works fine when talking to it from inner, but would be a problem from the outside. Furthermore, some sort of single-port-forwarding scheme wouldn't fix connecting from outer either, since the port number is dynamic. To confirm what we think is happening, the sequence of syscalls when trying to rostopic echo from outer does indeed fail:
connect fd=10(<4>) addr=10.0.1.1:45517 
connect res=-115(EINPROGRESS) tuple=10.0.1.1:46204->10.0.1.1:45517 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517)
getsockopt res=0 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=-111(ECONNREFUSED) optlen=4
That's the breakage mechanism: the ROS master asks us to communicate on an address we can't talk to. Debugging this is easy with sysdig:
sudo sysdig -A -s 400 evt.buffer contains '"Subscribed to"' and proc.name=rostopic
This prints out all syscalls seen by the rostopic command that contain the string "Subscribed to", so you can see the different addresses the ROS master gives us in response to different commands. OK. So can we get the ROS master to give us an address that we can actually talk to? Sorta. Remember that we invoked the master with
ROS_IP=10.0.1.1 roslaunch whatever
The ROS_IP environment variable is exactly the address that the master gives out. So in this case, we can fix it by doing this instead:
ROS_IP=12.34.56.78 roslaunch whatever
Then the outer machine will be asked to talk to 12.34.56.78:45517, which works. Unfortunately, if we do that, then the inner machine won't be able to communicate. So some sort of ssh port forward cannot fix this: we need a lower-level tunnel, like a VPN or something. And another rant. Here rostopic tried to connect to an unreachable address, which failed. But rostopic knows the connection failed! It should throw an error message to the user. Something like this would be wonderful:
ERROR! Tried to connect to 10.0.1.1:45517 ($ROS_IP:dynamicport), but connect() returned ECONNREFUSED
That would be immensely helpful. It would tell the user that something went wrong (instead of no data being sent), and it would give a strong indication of the problem and how to fix it. But that would be asking too much.

The solution
So we need a VPN-like thing. I just tried sshuttle, and it just works. Start the ROS node in the way that makes connections from within the LAN work:
ROS_IP=10.0.1.1 roslaunch whatever
Then on the outer client:
sshuttle -r router 10.0.1.0/24
This connects to the router over ssh and does some hackery to make all connections from outer to 10.0.1.x transparently route into the LAN. On all ports. rostopic echo then works. I haven't done any thorough testing, but hopefully it's reliable and has low overhead; I don't know. I haven't tried it but almost certainly this would work even with the ROS master running on inner. This would be accomplished like this:
  1. Tell ssh how to connect to inner. Dropping this into ~/.ssh/config should do it:
    Host inner
    HostName 10.0.1.99
    ProxyJump router
    
  2. Do the magic thing:
    sshuttle -r inner 10.0.1.0/24
    
I'm sure any other VPN-like thing would work also.

23 October 2023

Jonathan Dowland: cherished

minidisc player
iPod
Bose headphones
If I think back to technology I've used and really cherished, quite often they're audio-related: Minidisc players, Walkmans, MP3 players, headphones. These pieces of technology served as vessels to access music, which of course I often have a fond emotional connection to. And so I think the tech has benefited from that, and in some way the fondness or emotional connection to the music has somewhat transferred, or rubbed off, onto the technology used to access it. Put another way: no matter how well engineered it was, how easy it was to use or how well it did the job, I doubt I'd have fond memories, years later, of a toilet brush. I wonder if the same "bleeding" of fondness applies to brands, too. If so, and if you were a large tech company, it would be worth having some audio gear in your portfolio. I think Sony must have benefited from this. Apple too.

on-ear phones
For listening on-the-go, I really like on-ear headphones, as opposed to over-ear. I have some lovely over-ear phones for listening-at-rest, but they get my head too hot when I'm active. The on-ears are a nice compromise between the comfort and quality of over-ear, and the portability of in-ear. Most of the ones I've owned have folded up nicely into a coat pocket too. My current Bose pair are from 2019 and might be towards the end of their life. They replaced some AKG K451s, which were also discontinued. Last time I looked (2019) the Sony offerings in this product category were not great. That might have changed. But I fear that the manufacturers have collectively decided this product category isn't worth investing in.

12 October 2023

Reproducible Builds: Reproducible Builds in September 2023

Welcome to the September 2023 report from the Reproducible Builds project. In these reports, we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
Andreas Herrmann gave a talk at All Systems Go 2023 titled "Fast, correct, reproducible builds with Nix and Bazel". Quoting from the talk description:

You will be introduced to Google's open source build system Bazel, and will learn how it provides fast builds, how correctness and reproducibility is relevant, and how Bazel tries to ensure correctness. But, we will also see where Bazel falls short in ensuring correctness and reproducibility. You will [also] learn about the purely functional package manager Nix and how it approaches correctness and build isolation. And we will see where Bazel has an advantage over Nix when it comes to providing fast feedback during development.
Andreas also shows how you can get the best of both worlds and combine Nix and Bazel, too. A video of the talk is available.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb fixed compatibility with file(1) version 5.45 and updated some documentation. In addition, Vagrant Cascadian extended support for GNU Guix and updated the version in that distribution as well.
Yet another reminder that our upcoming Reproducible Builds Summit is set to take place from October 31st to November 2nd 2023 in Hamburg, Germany. If you haven't been before, our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. If you're interested in joining us this year, please make sure to read the event page, the news item, or the invitation email that Mattia Rizzolo sent out recently, all of which have more details about the event and location. We are also still looking for sponsors to support the event, so please reach out to the organising team if you are able to help. Also note that PackagingCon 2023 is taking place in Berlin just before our summit.
On the Reproducible Builds website, Greg Chabala updated a link to the BUILDSPEC.md file in the JVM-related documentation, and Fay Stegerman fixed builds that were failing because of a YAML syntax error.

Distribution work
In Debian, this month:
September saw F-Droid add ten new reproducible apps, and one existing app switched to reproducible builds. In addition, two reproducible apps were archived and one was disabled, for a current total of 199 apps published with Reproducible Builds and using the upstream developer's signature. An extensive blog post was also posted on f-droid.org, titled "Reproducible builds, signing keys, and binary repos".

Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework
The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen:
  • Disable armhf and i386 builds due to Debian bug #1052257.
  • Run diffoscope with a lower ionice priority.
  • Log every build in a simple text file and create persistent stamp files when running diffoscope to ease debugging.
  • Run schedulers one hour after dinstall again.
  • Temporarily use diffoscope from the host, and not from a schroot running the tested suite.
  • Fail the diffoscope distribution test if the diffoscope version cannot be determined.
  • Fix a spelling error in the email to IRC gateway.
  • Force (and document) the reconfiguration of all jobs, due to the recent rise of zombies.
  • Deal with a rare condition when killing processes which should not be there.
  • Install the Debian backports kernel in an attempt to address Debian bug #1052257.
In addition, Mattia Rizzolo fixed a call to diffoscope --version (as suggested by Fay Stegerman on our mailing list), worked on an openQA credential issue and also made some changes to the machine-readable reproducible metadata, reproducible-tracker.json. Lastly, Roland Clobus added instructions for manual configuration of the openQA secrets.

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

8 October 2023

Sahil Dhiman: Lap 24

Twenty-four is a big number. More than one-fourth of my life is behind me now. At this point, I truly feel like I have become an adult, mentally and physically. Another year seems to have gone by quickly. I still vividly remember writing 23 and Counting, and here I am writing the next one so soon. This was probably the lowest I have ever felt on a birthday, with the loss of Abraham and, on the other hand, medical issues with a dear one. It didn't even feel like my birthday was almost here. The loss of Abraham taught me to care for people more and to meet and cherish everyone. I'm grateful for all the people who supported and cared for me and others during times of grief when things went numb. Thank you! Also, for the first time ever, I went to the office on my birthday. This will probably become the norm in coming years. I didn't feel like doing anything, so I just went to the office. The cake, wishes and calls kept coming in throughout the day. I'm grateful to all the people around me for remembering :)

This year marked my first official job switch, where I moved from MakeMyTrip to Unmukti as a GNU/Linux Network Systems engineer (that's a mouthful of a job role, I know), where I do anything and everything ranging from system admin and network engineering to a bit of social media, chronicling stuff on the company blog and bringing up new applications as per requirement. Moving from MMT to Unmukti was a big cultural shift: from a full-blown corporate with more than three thousand employees to a small five-person team. People still think I work for a startup on hearing the low head count, though Unmukti is a 13-year-old organization. I get the freedom to work at my own pace and to put my ideas into larger technical discussions, while also actively participating in the community, which I'm truly grateful for. I go full geek here, and almost everyone here is on the same spectrum, so technical or societal discussions just naturally flow.

The months of August-September again marked the Great Refresh. For reasons unforeseen, I had to pack my stuff again and move, albeit just next door for now, but that gave me the much-needed opportunity to sift through my belongings here. As usual, I threw out a boatload of stuff which was of no use and/or just hogging space. My wardrobe cupboard finally got cleaned and sorted, with old and new clothes getting (re)discovered. The Refresh is always a pain, with loads of collating stuff into carry-worthy bags and hauling it around, but as usual, there's nothing else I can do other than just pack and move.

This year also culminated our four-plus years of work to bring the annual Debian conference, DebConf, to India. DebConf23 happened in Kochi, Kerala from 3rd September to 17th September (including DebCamp). The first concrete work to bring the conference to India was done by Raju Dev, who made the first bid during DebConf18 in Hsinchu, Taiwan. We lost, but won the next year's bid at DebConf19 in Curitiba, Brazil, in 2019. I joined the efforts after meeting the team online after DebConf20. I initially started with the publicity team, but as we didn't need much publicity for the event, I was later asked to join the sponsors/fundraising team. That turned out to be quite an experience. Then the conference itself turned out to be a good experience. More on that in an upcoming DebConf23 blog post, which will come eventually. After seeing how things work in Debian, in 2020 I set myself the goal of becoming a Debian Developer (DD) before DebConf23, which gave me almost three years to get involved and get recognized enough to become a DD.
I was more excited to grab sahil AT debian.org, a short email address with only my name and no numbers or extra characters after it. After quite a while, I had dropped the hope of becoming a DD because I wasn't successful in my attempts to meaningfully package and technically contribute to the project. But people in Debian India later convinced me that I had done enough to become a Debian Developer, non-uploading, purely by showing up and helping around for all the Debian conferences. I applied and got sponsored (i.e. supported) in my request by srud and Praveen. Finally, on 23rd Feb, I officially became part of the Debian project as the 14th (at the moment) Debian Developer from India. Got sahil AT debian.org too :) By some grace, I also became a DD before DebConf23. Becoming a DD didn't change anything much, though I still believe it might have helped secure me a job.

Also worth mentioning is my increased interest in OpenStreetMap (OSM) mapping. I mapped heavily this year and went around to mapathon-meetups too. One step towards a better OSM and more community engagement around it. Looking back at my blog, this year seems mostly dotted with Debian and one OSM post. That's a significant shift from the range of topics I used to write about in the past, but blogging wasn't a go-to activity for me this year; other stuff kept me busy.

Living in Gurugram has shown me many facades of life from which I was shielded or which I didn't come across earlier. It made me realize all the privileges which have helped me along the way, something which became apparent while living almost alone here and managing things by myself.

7 October 2023

Andrew Cater: Point release weekend for Debian: two releases this weekend: 202311071653

Over in Cambridge with RattusRattus, Sledge, egw and Isy. Andy is very kindly putting us up.

We're almost all of the way through testing 12.2 and some of the way through testing 11.8.

It's a LONG day - heads down into laptops and relatively quiet - I think we're all tired and we've a way to go yet.


2 October 2023

Jonathan Dowland: Promotion

It's been quiet here (I hope to change that), but I want to share some good news: I've been promoted to Principal Software Engineer! Next February will start my 9th year with Red Hat. Time flies when you're having fun!

30 September 2023

Ian Jackson: DKIM: rotate and publish your keys

If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released dkim-rotate 1.0; dkim-rotate is a tool to do this key rotation and publication. If you are an email user, your email provider ought to be doing this. If this is not done, your emails are non-repudiable, meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you).

Non-repudiation of emails is undesirable
This problem was described at some length in Matthew Green's article Ok Google: please publish your DKIM secret keys. Avoiding non-repudiation sounds a bit like lying. After all, I'm advising creating a situation where some people can't verify that something is true, even though it is. So I'm advocating casting doubt. Crucially, though, it's doubt about facts that ought to be private. When you send an email, that's between you and the recipient. Normally you don't intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it. In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide a verification tool.

Advice for all email users
As a user, you probably don't want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.) So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn't :-(.

How to tell by looking at email headers
A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.) If your friend sees a DKIM-Signature header then the message is DKIM signed. If they don't, then it wasn't. Most email traversing the public internet is DKIM signed nowadays; so if they don't see the header probably they're not looking using the right tools, or they're actually on the same email system as you. In messages signed by a system running dkim-rotate, there will also be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate's header looks like this:
DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem
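If you want to check for yourself whether the key that signed a particular message is still published, you can look it up in DNS. A sketch (the selector and domain below are placeholders; the real ones are in the s= and d= tags of the message's DKIM-Signature header):
    dig +short TXT 2023a._domainkey.example.org
An empty answer, or a record whose p= tag is empty, means the public key is gone and signatures made with it can no longer be verified.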
But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won't be such a header.

Testing verification of new and old messages
You can also try verifying the signatures. This isn't entirely straightforward, especially if you don't have access to low-level mail tooling. Your friend will need to be able to save emails as raw whole headers and body, un-decoded, un-rendered. If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have instructions for attaching and transferring and obtaining the raw email. (Scroll down to "How to Check DKIM and ARC".)

Checking that recent emails are verifiable
Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons. Send your friend a test email now, and have them do this on a Linux system:
    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify <test-email.mbox
You should see output containing something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...
If the output contains verify result: fail (body has been altered) then probably your friend didn't manage to faithfully save the unaltered raw message.

Checking old emails cannot be verified
When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps. Hopefully they will see something like this:
    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)
or maybe
    verify result: invalid (public key: not available)
This indicates that this old email can no longer be verified. That's good: it means that anyone who steals a copy can't verify it either. If it's leaked, the journalist who receives it won't know it's genuine and unmodified; they should then be suspicious. If your friend sees verify result: pass, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you.

For email admins: announcing dkim-rotate 1.0
I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that dkim-rotate is 1.0. If you're a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable, and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting.

Limitation of this approach
Even with this key rotation approach, emails remain non-repudiable for a short period after they're sent - typically, a few days. Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn't apply to the vast bulk of your email archive. There are possible email protocol improvements which might help, but they're quite out of scope for this article.
Edited 2023-10-01 00:20 +01:00 to fix some grammar



25 September 2023

Michael Prokop: Postfix failing with no shared cipher

I'm one of the few folks left who run and maintain mail servers. Recently I had major troubles receiving mails from the mail servers used by a bank, and when asking my favourite search engine, I'm clearly not the only one who ran into such an issue. Actually, I should have checked off the issue and not become a customer at that bank, but the tech nerd in me couldn't resist getting to the bottom of the problem. Since I got it working, and this might be useful for others, here we are. :) I was trying to get an online banking account set up, but the corresponding account creation mail didn't reach me at all. Looking at my mail server logs, my postfix mail server didn't accept the mail due to:
postfix/smtpd[3319640]: warning: TLS library problem: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher:../ssl/statem/statem_srvr.c:2283:
postfix/smtpd[3319640]: lost connection after STARTTLS from mx01.arz.at[193.110.182.61]
Huh, what's going on here?! Let's increase the TLS loglevel (setting smtpd_tls_loglevel = 2) and retry. But how can I retry receiving yet another mail? Luckily, on the registration website of the bank there was a URL available that let me request a one-time password. This triggered another mail, so I did that and managed to grab this in the logs:
postfix/smtpd[3320018]: initializing the server-side TLS engine
postfix/tlsmgr[3320020]: open smtpd TLS cache btree:/var/lib/postfix/smtpd_scache
postfix/tlsmgr[3320020]: tlsmgr_cache_run_event: start TLS smtpd session cache cleanup
postfix/smtpd[3320018]: connect from mx01.arz.at[193.110.182.61]
postfix/smtpd[3320018]: setting up TLS connection from mx01.arz.at[193.110.182.61]
postfix/smtpd[3320018]: mx01.arz.at[193.110.182.61]: TLS cipher list "aNULL:-aNULL:HIGH:MEDIUM:+RC4:@STRENGTH"
postfix/smtpd[3320018]: SSL_accept:before SSL initialization
postfix/smtpd[3320018]: SSL_accept:before SSL initialization
postfix/smtpd[3320018]: SSL3 alert write:fatal:handshake failure
postfix/smtpd[3320018]: SSL_accept:error in error
postfix/smtpd[3320018]: SSL_accept error from mx01.arz.at[193.110.182.61]: -1
postfix/smtpd[3320018]: warning: TLS library problem: error:1417A0C1:SSL routines:tls_post_process_client_hello:no shared cipher:../ssl/statem/statem_srvr.c:2283:
postfix/smtpd[3320018]: lost connection after STARTTLS from mx01.arz.at[193.110.182.61]
postfix/smtpd[3320018]: disconnect from mx01.arz.at[193.110.182.61] ehlo=1 starttls=0/1 commands=1/2
postfix/smtpd[3320018]: connect from mx01.arz.at[193.110.182.61]
postfix/smtpd[3320018]: disconnect from mx01.arz.at[193.110.182.61] ehlo=1 quit=1 commands=2
Ok, so this TLS cipher list aNULL:-aNULL:HIGH:MEDIUM:+RC4:@STRENGTH looked like the tls_medium_cipherlist setting in postfix, but which ciphers might we expect? Let's see which ciphers their SMTP server will speak with us:
% testssl --cipher-per-proto -t=smtp mx01.arz.at:25
[...]
Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 256   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 256   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
 xc014   ECDHE-RSA-AES256-SHA              ECDH 256   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x9d     AES256-GCM-SHA384                 RSA        AESGCM      256      TLS_RSA_WITH_AES_256_GCM_SHA384
 x3d     AES256-SHA256                     RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA256
 x35     AES256-SHA                        RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 256   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 xc027   ECDHE-RSA-AES128-SHA256           ECDH 256   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
 xc013   ECDHE-RSA-AES128-SHA              ECDH 256   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
 x9c     AES128-GCM-SHA256                 RSA        AESGCM      128      TLS_RSA_WITH_AES_128_GCM_SHA256
 x3c     AES128-SHA256                     RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA256
 x2f     AES128-SHA                        RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA
TLS 1.3
Looks like a very small subset of ciphers, and they don't seem to be talking TLS v1.3 at all? Not great. :( A nice web service to verify the situation from another point of view is checktls, which also confirmed this:
[000.705] 	<-- 	220 2.0.0 Ready to start TLS
[000.705] 		STARTTLS command works on this server
[001.260] 		Connection converted to SSL
		SSLVersion in use: TLSv1_2
		Cipher in use: ECDHE-RSA-AES256-GCM-SHA384
		Perfect Forward Secrecy: yes
		Session Algorithm in use: Curve P-256 DHE(256 bits)
		Certificate #1 of 3 (sent by MX):
		Cert VALIDATED: ok
		Cert Hostname VERIFIED (mx01.arz.at = *.arz.at   DNS:*.arz.at   DNS:arz.at)
[...]
[001.517] 		TLS successfully started on this server
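As a quick local cross-check (a sketch; the cipher string is the one our smtpd logged above, and which of these are actually offered also depends on the key type of the server certificate, as we'll see below), openssl can expand what that policy allows:
% openssl ciphers -v 'aNULL:-aNULL:HIGH:MEDIUM:+RC4:@STRENGTH'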
I got distracted by some other work, and when coming back to this problem, the one-time password procedure no longer worked, as the password reset URL was no longer valid. :( I managed to find the underlying URL, and with some web developer tools tinkering I could still use the website to let me trigger sending further one-time password mails, phew. Let's continue: my mail server was running Debian/bullseye with postfix v3.5.18-0+deb11u1 and openssl v1.1.1n-0+deb11u5, so let's see what it offers:
% testssl --cipher-per-proto -t=smtp mail.example.com:25
[...]
Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
SSLv2
SSLv3
TLS 1
 xc00a   ECDHE-ECDSA-AES256-SHA            ECDH 253   AES         256      TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 xc009   ECDHE-ECDSA-AES128-SHA            ECDH 253   AES         128      TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
TLS 1.1
 xc00a   ECDHE-ECDSA-AES256-SHA            ECDH 253   AES         256      TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 xc009   ECDHE-ECDSA-AES128-SHA            ECDH 253   AES         128      TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
TLS 1.2
 xc02c   ECDHE-ECDSA-AES256-GCM-SHA384     ECDH 253   AESGCM      256      TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
 xc024   ECDHE-ECDSA-AES256-SHA384         ECDH 253   AES         256      TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
 xc00a   ECDHE-ECDSA-AES256-SHA            ECDH 253   AES         256      TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
 xcca9   ECDHE-ECDSA-CHACHA20-POLY1305     ECDH 253   ChaCha20    256      TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
 xc0af   ECDHE-ECDSA-AES256-CCM8           ECDH 253   AESCCM8     256      TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8
 xc0ad   ECDHE-ECDSA-AES256-CCM            ECDH 253   AESCCM      256      TLS_ECDHE_ECDSA_WITH_AES_256_CCM
 xc073   ECDHE-ECDSA-CAMELLIA256-SHA384    ECDH 253   Camellia    256      TLS_ECDHE_ECDSA_WITH_CAMELLIA_256_CBC_SHA384
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 xa7     ADH-AES256-GCM-SHA384             DH 2048    AESGCM      256      TLS_DH_anon_WITH_AES_256_GCM_SHA384
 x6d     ADH-AES256-SHA256                 DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA256
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 xc5     ADH-CAMELLIA256-SHA256            DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 xc05d   ECDHE-ECDSA-ARIA256-GCM-SHA384    ECDH 253   ARIAGCM     256      TLS_ECDHE_ECDSA_WITH_ARIA_256_GCM_SHA384
 xc02b   ECDHE-ECDSA-AES128-GCM-SHA256     ECDH 253   AESGCM      128      TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
 xc023   ECDHE-ECDSA-AES128-SHA256         ECDH 253   AES         128      TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
 xc009   ECDHE-ECDSA-AES128-SHA            ECDH 253   AES         128      TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
 xc0ae   ECDHE-ECDSA-AES128-CCM8           ECDH 253   AESCCM8     128      TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8
 xc0ac   ECDHE-ECDSA-AES128-CCM            ECDH 253   AESCCM      128      TLS_ECDHE_ECDSA_WITH_AES_128_CCM
 xc072   ECDHE-ECDSA-CAMELLIA128-SHA256    ECDH 253   Camellia    128      TLS_ECDHE_ECDSA_WITH_CAMELLIA_128_CBC_SHA256
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 xa6     ADH-AES128-GCM-SHA256             DH 2048    AESGCM      128      TLS_DH_anon_WITH_AES_128_GCM_SHA256
 x6c     ADH-AES128-SHA256                 DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA256
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 xbf     ADH-CAMELLIA128-SHA256            DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
 xc05c   ECDHE-ECDSA-ARIA128-GCM-SHA256    ECDH 253   ARIAGCM     128      TLS_ECDHE_ECDSA_WITH_ARIA_128_GCM_SHA256
TLS 1.3
 x1302   TLS_AES_256_GCM_SHA384            ECDH 253   AESGCM      256      TLS_AES_256_GCM_SHA384
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 253   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256
 x1301   TLS_AES_128_GCM_SHA256            ECDH 253   AESGCM      128      TLS_AES_128_GCM_SHA256
Not so bad, but sadly no overlap with any of the ciphers that mx01.arz.at offers. What about disabling STARTTLS for the mx01.arz.at (+ mx02.arz.at being another one used by the relevant domain) mail servers when talking to mine? Let's try that:
% sudo postconf -nf smtpd_discard_ehlo_keyword_address_maps
smtpd_discard_ehlo_keyword_address_maps =
    hash:/etc/postfix/smtpd_discard_ehlo_keywords
% cat /etc/postfix/smtpd_discard_ehlo_keywords
# *disable* starttls for mx01.arz.at / mx02.arz.at:
193.110.182.61 starttls
193.110.182.62 starttls
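One small detail worth mentioning (an assumption on my side, in case the map file is newly created): a hash: table needs to be compiled with postmap before postfix picks it up, followed by a reload:
% sudo postmap /etc/postfix/smtpd_discard_ehlo_keywords
% sudo systemctl reload postfix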
But the remote mail server doesn't seem to send mails without TLS:
postfix/smtpd[4151799]: connect from mx01.arz.at[193.110.182.61]
postfix/smtpd[4151799]: discarding EHLO keywords: STARTTLS
postfix/smtpd[4151799]: disconnect from mx01.arz.at[193.110.182.61] ehlo=1 quit=1 commands=2
Let's verify this further, but without fiddling with the main mail server too much. We can add a dedicated service to postfix (see serverfault), and run it in verbose mode, to get more detailed logging:
% sudo postconf -Mf
[...]
10025      inet  n       -       -       -       -       smtpd
    -o syslog_name=postfix/smtpd/badstarttls
    -o smtpd_tls_security_level=none
    -o smtpd_helo_required=yes
    -o smtpd_helo_restrictions=pcre:/etc/postfix/helo_badstarttls_allow,reject
    -v
[...]
% cat /etc/postfix/helo_badstarttls_allow
/mx01.arz.at/ OK
/mx02.arz.at/ OK
/193.110.182.61/ OK
/193.110.182.62/ OK
We redirect the traffic from mx01.arz.at + mx02.arz.at towards our new postfix service, listening on port 10025:
% sudo iptables -t nat -A PREROUTING -p tcp -s 193.110.182.61 --dport 25 -j REDIRECT --to-port 10025
% sudo iptables -t nat -A PREROUTING -p tcp -s 193.110.182.62 --dport 25 -j REDIRECT --to-port 10025
With this setup we get very detailed logging, and it seems to confirm our suspicion that the mail server doesn't want to talk unencrypted with us:
[...]
postfix/smtpd/badstarttls/smtpd[3491900]: connect from mx01.arz.at[193.110.182.61]
[...]
postfix/smtpd/badstarttls/smtpd[3491901]: disconnect from mx01.arz.at[193.110.182.61] ehlo=1 quit=1 commands=2
postfix/smtpd/badstarttls/smtpd[3491901]: master_notify: status 1
postfix/smtpd/badstarttls/smtpd[3491901]: connection closed
[...]
Let's step back and revert those changes, back to our original postfix setup. Might the problem be related to our Let's Encrypt certificate? Let's see what we have:
% echo QUIT | openssl s_client -connect mail.example.com:25 -starttls smtp
[...]
issuer=C = US, O = Let's Encrypt, CN = R3
---
No client certificate CA names sent
Peer signing digest: SHA384
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 4455 bytes and written 427 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 384 bit
[...]
We have an ECDSA based certificate, what about switching to RSA instead? Thanks to the wonderful dehydrated, this is as easy as:
% echo KEY_ALGO=rsa > certs/mail.example.com/config
% ./dehydrated -c --domain mail.example.com --force
% sudo systemctl reload postfix
With switching to RSA type key we get:
% echo QUIT | openssl s_client -connect mail.example.com:25 -starttls smtp
CONNECTED(00000003)
[...]
issuer=C = US, O = Let's Encrypt, CN = R3
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 5295 bytes and written 427 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 4096 bit
Which ciphers do we offer now? Let's check:
% testssl --cipher-per-proto -t=smtp mail.example.com:25
[...]
Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
SSLv2
SSLv3
TLS 1
 xc014   ECDHE-RSA-AES256-SHA              ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x39     DHE-RSA-AES256-SHA                DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 x88     DHE-RSA-CAMELLIA256-SHA           DH 2048    Camellia    256      TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 x35     AES256-SHA                        RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA
 x84     CAMELLIA256-SHA                   RSA        Camellia    256      TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc013   ECDHE-RSA-AES128-SHA              ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
 x33     DHE-RSA-AES128-SHA                DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA
 x9a     DHE-RSA-SEED-SHA                  DH 2048    SEED        128      TLS_DHE_RSA_WITH_SEED_CBC_SHA
 x45     DHE-RSA-CAMELLIA128-SHA           DH 2048    Camellia    128      TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
 x2f     AES128-SHA                        RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA
 x96     SEED-SHA                          RSA        SEED        128      TLS_RSA_WITH_SEED_CBC_SHA
 x41     CAMELLIA128-SHA                   RSA        Camellia    128      TLS_RSA_WITH_CAMELLIA_128_CBC_SHA
TLS 1.1
 xc014   ECDHE-RSA-AES256-SHA              ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x39     DHE-RSA-AES256-SHA                DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 x88     DHE-RSA-CAMELLIA256-SHA           DH 2048    Camellia    256      TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 x35     AES256-SHA                        RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA
 x84     CAMELLIA256-SHA                   RSA        Camellia    256      TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc013   ECDHE-RSA-AES128-SHA              ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
 x33     DHE-RSA-AES128-SHA                DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA
 x9a     DHE-RSA-SEED-SHA                  DH 2048    SEED        128      TLS_DHE_RSA_WITH_SEED_CBC_SHA
 x45     DHE-RSA-CAMELLIA128-SHA           DH 2048    Camellia    128      TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
 x2f     AES128-SHA                        RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA
 x96     SEED-SHA                          RSA        SEED        128      TLS_RSA_WITH_SEED_CBC_SHA
 x41     CAMELLIA128-SHA                   RSA        Camellia    128      TLS_RSA_WITH_CAMELLIA_128_CBC_SHA
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 253   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
 xc014   ECDHE-RSA-AES256-SHA              ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x9f     DHE-RSA-AES256-GCM-SHA384         DH 2048    AESGCM      256      TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 253   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xccaa   DHE-RSA-CHACHA20-POLY1305         DH 2048    ChaCha20    256      TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc0a3   DHE-RSA-AES256-CCM8               DH 2048    AESCCM8     256      TLS_DHE_RSA_WITH_AES_256_CCM_8
 xc09f   DHE-RSA-AES256-CCM                DH 2048    AESCCM      256      TLS_DHE_RSA_WITH_AES_256_CCM
 x6b     DHE-RSA-AES256-SHA256             DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
 x39     DHE-RSA-AES256-SHA                DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 xc077   ECDHE-RSA-CAMELLIA256-SHA384      ECDH 253   Camellia    256      TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384
 xc4     DHE-RSA-CAMELLIA256-SHA256        DH 2048    Camellia    256      TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256
 x88     DHE-RSA-CAMELLIA256-SHA           DH 2048    Camellia    256      TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc019   AECDH-AES256-SHA                  ECDH 253   AES         256      TLS_ECDH_anon_WITH_AES_256_CBC_SHA
 xa7     ADH-AES256-GCM-SHA384             DH 2048    AESGCM      256      TLS_DH_anon_WITH_AES_256_GCM_SHA384
 x6d     ADH-AES256-SHA256                 DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA256
 x3a     ADH-AES256-SHA                    DH 2048    AES         256      TLS_DH_anon_WITH_AES_256_CBC_SHA
 xc5     ADH-CAMELLIA256-SHA256            DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA256
 x89     ADH-CAMELLIA256-SHA               DH 2048    Camellia    256      TLS_DH_anon_WITH_CAMELLIA_256_CBC_SHA
 x9d     AES256-GCM-SHA384                 RSA        AESGCM      256      TLS_RSA_WITH_AES_256_GCM_SHA384
 xc0a1   AES256-CCM8                       RSA        AESCCM8     256      TLS_RSA_WITH_AES_256_CCM_8
 xc09d   AES256-CCM                        RSA        AESCCM      256      TLS_RSA_WITH_AES_256_CCM
 x3d     AES256-SHA256                     RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA256
 x35     AES256-SHA                        RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA
 xc0     CAMELLIA256-SHA256                RSA        Camellia    256      TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256
 x84     CAMELLIA256-SHA                   RSA        Camellia    256      TLS_RSA_WITH_CAMELLIA_256_CBC_SHA
 xc051   ARIA256-GCM-SHA384                RSA        ARIAGCM     256      TLS_RSA_WITH_ARIA_256_GCM_SHA384
 xc053   DHE-RSA-ARIA256-GCM-SHA384        DH 2048    ARIAGCM     256      TLS_DHE_RSA_WITH_ARIA_256_GCM_SHA384
 xc061   ECDHE-ARIA256-GCM-SHA384          ECDH 253   ARIAGCM     256      TLS_ECDHE_RSA_WITH_ARIA_256_GCM_SHA384
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 253   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 xc027   ECDHE-RSA-AES128-SHA256           ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
 xc013   ECDHE-RSA-AES128-SHA              ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
 x9e     DHE-RSA-AES128-GCM-SHA256         DH 2048    AESGCM      128      TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
 xc0a2   DHE-RSA-AES128-CCM8               DH 2048    AESCCM8     128      TLS_DHE_RSA_WITH_AES_128_CCM_8
 xc09e   DHE-RSA-AES128-CCM                DH 2048    AESCCM      128      TLS_DHE_RSA_WITH_AES_128_CCM
 xc0a0   AES128-CCM8                       RSA        AESCCM8     128      TLS_RSA_WITH_AES_128_CCM_8
 xc09c   AES128-CCM                        RSA        AESCCM      128      TLS_RSA_WITH_AES_128_CCM
 x67     DHE-RSA-AES128-SHA256             DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
 x33     DHE-RSA-AES128-SHA                DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA
 xc076   ECDHE-RSA-CAMELLIA128-SHA256      ECDH 253   Camellia    128      TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256
 xbe     DHE-RSA-CAMELLIA128-SHA256        DH 2048    Camellia    128      TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256
 x9a     DHE-RSA-SEED-SHA                  DH 2048    SEED        128      TLS_DHE_RSA_WITH_SEED_CBC_SHA
 x45     DHE-RSA-CAMELLIA128-SHA           DH 2048    Camellia    128      TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA
 xc018   AECDH-AES128-SHA                  ECDH 253   AES         128      TLS_ECDH_anon_WITH_AES_128_CBC_SHA
 xa6     ADH-AES128-GCM-SHA256             DH 2048    AESGCM      128      TLS_DH_anon_WITH_AES_128_GCM_SHA256
 x6c     ADH-AES128-SHA256                 DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA256
 x34     ADH-AES128-SHA                    DH 2048    AES         128      TLS_DH_anon_WITH_AES_128_CBC_SHA
 xbf     ADH-CAMELLIA128-SHA256            DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA256
 x9b     ADH-SEED-SHA                      DH 2048    SEED        128      TLS_DH_anon_WITH_SEED_CBC_SHA
 x46     ADH-CAMELLIA128-SHA               DH 2048    Camellia    128      TLS_DH_anon_WITH_CAMELLIA_128_CBC_SHA
 x9c     AES128-GCM-SHA256                 RSA        AESGCM      128      TLS_RSA_WITH_AES_128_GCM_SHA256
 x3c     AES128-SHA256                     RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA256
 x2f     AES128-SHA                        RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA
 xba     CAMELLIA128-SHA256                RSA        Camellia    128      TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256
 x96     SEED-SHA                          RSA        SEED        128      TLS_RSA_WITH_SEED_CBC_SHA
 x41     CAMELLIA128-SHA                   RSA        Camellia    128      TLS_RSA_WITH_CAMELLIA_128_CBC_SHA
 xc050   ARIA128-GCM-SHA256                RSA        ARIAGCM     128      TLS_RSA_WITH_ARIA_128_GCM_SHA256
 xc052   DHE-RSA-ARIA128-GCM-SHA256        DH 2048    ARIAGCM     128      TLS_DHE_RSA_WITH_ARIA_128_GCM_SHA256
 xc060   ECDHE-ARIA128-GCM-SHA256          ECDH 253   ARIAGCM     128      TLS_ECDHE_RSA_WITH_ARIA_128_GCM_SHA256
TLS 1.3
 x1302   TLS_AES_256_GCM_SHA384            ECDH 253   AESGCM      256      TLS_AES_256_GCM_SHA384
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 253   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256
 x1301   TLS_AES_128_GCM_SHA256            ECDH 253   AESGCM      128      TLS_AES_128_GCM_SHA256
By switching our SSL certificate to RSA, we gained around 51 new cipher options, amongst them ones that mx01.arz.at also claimed to support. FTR, the result from above is what you get with the default settings for postfix v3.5.18, namely:
smtpd_tls_ciphers = medium
smtpd_tls_mandatory_ciphers = medium
smtpd_tls_mandatory_exclude_ciphers =
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
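As an aside (not part of my setup above, just a sketch with assumed dehydrated paths): postfix can also serve an RSA and an ECDSA certificate side by side, so switching key types doesn't have to be an either/or decision:
smtpd_tls_cert_file   = /var/lib/dehydrated/certs/mail.example.com-rsa/fullchain.pem
smtpd_tls_key_file    = /var/lib/dehydrated/certs/mail.example.com-rsa/privkey.pem
smtpd_tls_eccert_file = /var/lib/dehydrated/certs/mail.example.com/fullchain.pem
smtpd_tls_eckey_file  = /var/lib/dehydrated/certs/mail.example.com/privkey.pem
With both configured, peers that only offer RSA ciphers (like mx01.arz.at apparently does) get the RSA chain, while everyone else keeps using ECDSA.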
But the delay between triggering the password reset mail and getting a mail server connect was getting bigger and bigger. Therefore, while waiting for the next mail to arrive, I decided to capture the network traffic, so I could look into this further if it continued to fail:
% sudo tshark -n -i eth0 -s 65535 -w arz.pcap -f "host 193.110.182.61 or host 193.110.182.62"
A few hours later the mail server connected again, and the mail went through!
postfix/smtpd[4162835]: connect from mx01.arz.at[193.110.182.61]
postfix/smtpd[4162835]: Anonymous TLS connection established from mx01.arz.at[193.110.182.61]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
postfix/smtpd[4162835]: E50D6401E6: client=mx01.arz.at[193.110.182.61]
postfix/smtpd[4162835]: disconnect from mx01.arz.at[193.110.182.61] ehlo=2 starttls=1 mail=1 rcpt=1 data=1 quit=1 commands=7
Now also having the captured network traffic, we can check the details there:
[...]
% tshark -o smtp.decryption:true -r arz.pcap
    1 0.000000000 193.110.182.61   203.0.113.42 TCP 74 24699   25 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=2261106119 TSecr=0 WS=128
    2 0.000042827 203.0.113.42   193.110.182.61 TCP 74 25   24699 [SYN, ACK] Seq=0 Ack=1 Win=65160 Len=0 MSS=1460 SACK_PERM=1 TSval=3233422181 TSecr=2261106119 WS=128
    3 0.020719269 193.110.182.61   203.0.113.42 TCP 66 24699   25 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=2261106139 TSecr=3233422181
    4 0.022883259 203.0.113.42   193.110.182.61 SMTP 96 S: 220 mail.example.com ESMTP
    5 0.043682626 193.110.182.61   203.0.113.42 TCP 66 24699   25 [ACK] Seq=1 Ack=31 Win=29312 Len=0 TSval=2261106162 TSecr=3233422203
    6 0.043799047 193.110.182.61   203.0.113.42 SMTP 84 C: EHLO mx01.arz.at
    7 0.043811363 203.0.113.42   193.110.182.61 TCP 66 25   24699 [ACK] Seq=31 Ack=19 Win=65280 Len=0 TSval=3233422224 TSecr=2261106162
    8 0.043898412 203.0.113.42   193.110.182.61 SMTP 253 S: 250-mail.example.com   PIPELINING   SIZE 20240000   VRFY   ETRN   AUTH PLAIN   AUTH=PLAIN   ENHANCEDSTATUSCODES   8BITMIME   DSN   SMTPUTF8   CHUNKING
    9 0.064625499 193.110.182.61   203.0.113.42 SMTP 72 C: QUIT
   10 0.064750257 203.0.113.42   193.110.182.61 SMTP 81 S: 221 2.0.0 Bye
   11 0.064760200 203.0.113.42   193.110.182.61 TCP 66 25   24699 [FIN, ACK] Seq=233 Ack=25 Win=65280 Len=0 TSval=3233422245 TSecr=2261106183
   12 0.085573715 193.110.182.61   203.0.113.42 TCP 66 24699   25 [FIN, ACK] Seq=25 Ack=234 Win=30336 Len=0 TSval=2261106204 TSecr=3233422245
   13 0.085610229 203.0.113.42   193.110.182.61 TCP 66 25   24699 [ACK] Seq=234 Ack=26 Win=65280 Len=0 TSval=3233422266 TSecr=2261106204
   14 1799.888108373 193.110.182.61   203.0.113.42 TCP 74 10330   25 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=2262906007 TSecr=0 WS=128
   15 1799.888161311 203.0.113.42   193.110.182.61 TCP 74 25   10330 [SYN, ACK] Seq=0 Ack=1 Win=65160 Len=0 MSS=1460 SACK_PERM=1 TSval=3235222069 TSecr=2262906007 WS=128
   16 1799.909030335 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=2262906028 TSecr=3235222069
   17 1799.956621011 203.0.113.42   193.110.182.61 SMTP 96 S: 220 mail.example.com ESMTP
   18 1799.977229656 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=1 Ack=31 Win=29312 Len=0 TSval=2262906096 TSecr=3235222137
   19 1799.977229698 193.110.182.61   203.0.113.42 SMTP 84 C: EHLO mx01.arz.at
   20 1799.977266759 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=31 Ack=19 Win=65280 Len=0 TSval=3235222158 TSecr=2262906096
   21 1799.977351663 203.0.113.42   193.110.182.61 SMTP 267 S: 250-mail.example.com   PIPELINING   SIZE 20240000   VRFY   ETRN   STARTTLS   AUTH PLAIN   AUTH=PLAIN   ENHANCEDSTATUSCODES   8BITMIME   DSN   SMTPUTF8   CHUNKING
   22 1800.011494861 193.110.182.61   203.0.113.42 SMTP 76 C: STARTTLS
   23 1800.011589267 203.0.113.42   193.110.182.61 SMTP 96 S: 220 2.0.0 Ready to start TLS
   24 1800.032812294 193.110.182.61   203.0.113.42 TLSv1 223 Client Hello
   25 1800.032987264 203.0.113.42   193.110.182.61 TLSv1.2 2962 Server Hello
   26 1800.032995513 203.0.113.42   193.110.182.61 TCP 1266 25   10330 [PSH, ACK] Seq=3158 Ack=186 Win=65152 Len=1200 TSval=3235222214 TSecr=2262906151 [TCP segment of a reassembled PDU]
   27 1800.053546755 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=186 Ack=3158 Win=36096 Len=0 TSval=2262906172 TSecr=3235222214
   28 1800.092852469 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=186 Ack=4358 Win=39040 Len=0 TSval=2262906212 TSecr=3235222214
   29 1800.092892905 203.0.113.42   193.110.182.61 TLSv1.2 900 Certificate, Server Key Exchange, Server Hello Done
   30 1800.113546769 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=186 Ack=5192 Win=41856 Len=0 TSval=2262906232 TSecr=3235222273
   31 1800.114763363 193.110.182.61   203.0.113.42 TLSv1.2 192 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message
   32 1800.115000416 203.0.113.42   193.110.182.61 TLSv1.2 117 Change Cipher Spec, Encrypted Handshake Message
   33 1800.136070200 193.110.182.61   203.0.113.42 TLSv1.2 113 Application Data
   34 1800.136155526 203.0.113.42   193.110.182.61 TLSv1.2 282 Application Data
   35 1800.158854473 193.110.182.61   203.0.113.42 TLSv1.2 162 Application Data
   36 1800.159254794 203.0.113.42   193.110.182.61 TLSv1.2 109 Application Data
   37 1800.180286407 193.110.182.61   203.0.113.42 TLSv1.2 144 Application Data
   38 1800.223005960 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=5502 Ack=533 Win=65152 Len=0 TSval=3235222404 TSecr=2262906299
   39 1802.230300244 203.0.113.42   193.110.182.61 TLSv1.2 146 Application Data
   40 1802.251994333 193.110.182.61   203.0.113.42 TCP 2962 [TCP segment of a reassembled PDU]
   41 1802.252034015 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=5582 Ack=3429 Win=63616 Len=0 TSval=3235224433 TSecr=2262908371
   42 1802.252279083 193.110.182.61   203.0.113.42 TLSv1.2 1295 Application Data
   43 1802.252288316 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=5582 Ack=4658 Win=64128 Len=0 TSval=3235224433 TSecr=2262908371
   44 1802.272816060 193.110.182.61   203.0.113.42 TLSv1.2 833 Application Data, Application Data
   45 1802.272827542 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=5582 Ack=5425 Win=64128 Len=0 TSval=3235224453 TSecr=2262908392
   46 1802.338807683 203.0.113.42   193.110.182.61 TLSv1.2 131 Application Data
   47 1802.398968611 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=5425 Ack=5647 Win=44800 Len=0 TSval=2262908518 TSecr=3235224519
   48 1863.257457500 193.110.182.61   203.0.113.42 TLSv1.2 101 Application Data
   49 1863.257495688 203.0.113.42   193.110.182.61 TCP 66 25   10330 [ACK] Seq=5647 Ack=5460 Win=64128 Len=0 TSval=3235285438 TSecr=2262969376
   50 1863.257654942 203.0.113.42   193.110.182.61 TLSv1.2 110 Application Data
   51 1863.257721010 203.0.113.42   193.110.182.61 TLSv1.2 97 Encrypted Alert
   52 1863.278242216 193.110.182.61   203.0.113.42 TCP 66 10330   25 [ACK] Seq=5460 Ack=5691 Win=44800 Len=0 TSval=2262969397 TSecr=3235285438
   53 1863.278464176 193.110.182.61   203.0.113.42 TCP 66 10330   25 [RST, ACK] Seq=5460 Ack=5723 Win=44800 Len=0 TSval=2262969397 TSecr=3235285438
% tshark -O tls -r arz.pcap
[...]
Transport Layer Security
    TLSv1 Record Layer: Handshake Protocol: Client Hello
        Content Type: Handshake (22)
        Version: TLS 1.0 (0x0301)
        Length: 152
        Handshake Protocol: Client Hello
            Handshake Type: Client Hello (1)
            Length: 148
            Version: TLS 1.2 (0x0303)
            Random: 4575d1e7c93c09a564edc00b8b56ea6f5d826f8cfe78eb980c451a70a9c5123f
                GMT Unix Time: Dec  5, 2006 21:09:11.000000000 CET
                Random Bytes: c93c09a564edc00b8b56ea6f5d826f8cfe78eb980c451a70a9c5123f
            Session ID Length: 0
            Cipher Suites Length: 26
            Cipher Suites (13 suites)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (0xc027)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
                Cipher Suite: TLS_RSA_WITH_AES_256_GCM_SHA384 (0x009d)
                Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
                Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA256 (0x003d)
                Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA256 (0x003c)
                Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
                Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
                Cipher Suite: TLS_EMPTY_RENEGOTIATION_INFO_SCSV (0x00ff)
[...]
Transport Layer Security
    TLSv1.2 Record Layer: Handshake Protocol: Server Hello
        Content Type: Handshake (22)
        Version: TLS 1.2 (0x0303)
        Length: 89
        Handshake Protocol: Server Hello
            Handshake Type: Server Hello (2)
            Length: 85
            Version: TLS 1.2 (0x0303)
            Random: cf2ed24e3300e95e5f56023bf8b4e5904b862bb2ed8a5796444f574e47524401
                GMT Unix Time: Feb 23, 2080 23:16:46.000000000 CET
                Random Bytes: 3300e95e5f56023bf8b4e5904b862bb2ed8a5796444f574e47524401
            Session ID Length: 32
            Session ID: 63d041b126ecebf857d685abd9d4593c46a3672e1ad76228f3eacf2164f86fb9
            Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)
[...]
In this network dump we see which cipher suites are offered, and the TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 here is the Cipher Suite Name in IANA/RFC speak. This corresponds to ECDHE-RSA-AES256-GCM-SHA384 in openssl speak (see Mozilla's cipher suite correspondence table), which we also saw in the postfix log. Mission accomplished! :) Now, if we're interested in avoiding certain ciphers and increasing the security level, we can e.g. get rid of the SEED, CAMELLIA and all anonymous ciphers, and accept only TLS v1.2 + v1.3, by further adjusting postfix's main.cf:
smtpd_tls_ciphers = high
smtpd_tls_exclude_ciphers = aNULL CAMELLIA
smtpd_tls_mandatory_ciphers = high
smtpd_tls_mandatory_protocols = TLSv1.2 TLSv1.3
smtpd_tls_protocols = TLSv1.2 TLSv1.3
Which then gives us:
% testssl --cipher-per-proto -t=smtp mail.example.com:25
[...]
Hexcode  Cipher Suite Name (OpenSSL)       KeyExch.   Encryption  Bits     Cipher Suite Name (IANA/RFC)
-----------------------------------------------------------------------------------------------------------------------------
SSLv2
SSLv3
TLS 1
TLS 1.1
TLS 1.2
 xc030   ECDHE-RSA-AES256-GCM-SHA384       ECDH 253   AESGCM      256      TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
 xc028   ECDHE-RSA-AES256-SHA384           ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
 xc014   ECDHE-RSA-AES256-SHA              ECDH 253   AES         256      TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 x9f     DHE-RSA-AES256-GCM-SHA384         DH 2048    AESGCM      256      TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
 xcca8   ECDHE-RSA-CHACHA20-POLY1305       ECDH 253   ChaCha20    256      TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xccaa   DHE-RSA-CHACHA20-POLY1305         DH 2048    ChaCha20    256      TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
 xc0a3   DHE-RSA-AES256-CCM8               DH 2048    AESCCM8     256      TLS_DHE_RSA_WITH_AES_256_CCM_8
 xc09f   DHE-RSA-AES256-CCM                DH 2048    AESCCM      256      TLS_DHE_RSA_WITH_AES_256_CCM
 x6b     DHE-RSA-AES256-SHA256             DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
 x39     DHE-RSA-AES256-SHA                DH 2048    AES         256      TLS_DHE_RSA_WITH_AES_256_CBC_SHA
 x9d     AES256-GCM-SHA384                 RSA        AESGCM      256      TLS_RSA_WITH_AES_256_GCM_SHA384
 xc0a1   AES256-CCM8                       RSA        AESCCM8     256      TLS_RSA_WITH_AES_256_CCM_8
 xc09d   AES256-CCM                        RSA        AESCCM      256      TLS_RSA_WITH_AES_256_CCM
 x3d     AES256-SHA256                     RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA256
 x35     AES256-SHA                        RSA        AES         256      TLS_RSA_WITH_AES_256_CBC_SHA
 xc051   ARIA256-GCM-SHA384                RSA        ARIAGCM     256      TLS_RSA_WITH_ARIA_256_GCM_SHA384
 xc053   DHE-RSA-ARIA256-GCM-SHA384        DH 2048    ARIAGCM     256      TLS_DHE_RSA_WITH_ARIA_256_GCM_SHA384
 xc061   ECDHE-ARIA256-GCM-SHA384          ECDH 253   ARIAGCM     256      TLS_ECDHE_RSA_WITH_ARIA_256_GCM_SHA384
 xc02f   ECDHE-RSA-AES128-GCM-SHA256       ECDH 253   AESGCM      128      TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
 xc027   ECDHE-RSA-AES128-SHA256           ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
 xc013   ECDHE-RSA-AES128-SHA              ECDH 253   AES         128      TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
 x9e     DHE-RSA-AES128-GCM-SHA256         DH 2048    AESGCM      128      TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
 xc0a2   DHE-RSA-AES128-CCM8               DH 2048    AESCCM8     128      TLS_DHE_RSA_WITH_AES_128_CCM_8
 xc09e   DHE-RSA-AES128-CCM                DH 2048    AESCCM      128      TLS_DHE_RSA_WITH_AES_128_CCM
 xc0a0   AES128-CCM8                       RSA        AESCCM8     128      TLS_RSA_WITH_AES_128_CCM_8
 xc09c   AES128-CCM                        RSA        AESCCM      128      TLS_RSA_WITH_AES_128_CCM
 x67     DHE-RSA-AES128-SHA256             DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
 x33     DHE-RSA-AES128-SHA                DH 2048    AES         128      TLS_DHE_RSA_WITH_AES_128_CBC_SHA
 x9c     AES128-GCM-SHA256                 RSA        AESGCM      128      TLS_RSA_WITH_AES_128_GCM_SHA256
 x3c     AES128-SHA256                     RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA256
 x2f     AES128-SHA                        RSA        AES         128      TLS_RSA_WITH_AES_128_CBC_SHA
 xc050   ARIA128-GCM-SHA256                RSA        ARIAGCM     128      TLS_RSA_WITH_ARIA_128_GCM_SHA256
 xc052   DHE-RSA-ARIA128-GCM-SHA256        DH 2048    ARIAGCM     128      TLS_DHE_RSA_WITH_ARIA_128_GCM_SHA256
 xc060   ECDHE-ARIA128-GCM-SHA256          ECDH 253   ARIAGCM     128      TLS_ECDHE_RSA_WITH_ARIA_128_GCM_SHA256
TLS 1.3
 x1302   TLS_AES_256_GCM_SHA384            ECDH 253   AESGCM      256      TLS_AES_256_GCM_SHA384
 x1303   TLS_CHACHA20_POLY1305_SHA256      ECDH 253   ChaCha20    256      TLS_CHACHA20_POLY1305_SHA256
 x1301   TLS_AES_128_GCM_SHA256            ECDH 253   AESGCM      128      TLS_AES_128_GCM_SHA256
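The smtpd_tls_* parameters above only affect the receiving side. As a rough sketch of a matching client-side policy for outgoing mail (these lines are not from the original post, just the smtp_tls_* counterparts of the settings above; with opportunistic TLS, overly strict protocol or cipher restrictions can cause delivery problems, or cleartext fallback, towards old servers):
smtp_tls_security_level = may
smtp_tls_ciphers = high
smtp_tls_exclude_ciphers = aNULL CAMELLIA
smtp_tls_mandatory_ciphers = high
smtp_tls_mandatory_protocols = TLSv1.2 TLSv1.3
smtp_tls_protocols = TLSv1.2 TLSv1.3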
Don't forget to also adjust the smtp_tls_* settings accordingly for your sending side (as sketched above). For further information see the Postfix TLS Support documentation. Also check out options like tls_ssl_options (setting it to e.g. NO_COMPRESSION) and tls_preempt_cipherlist (setting it to yes would prefer the server's cipher order over the client's). Conclusions:

21 September 2023

Jonathan Carter: DebConf23

I very, very nearly didn't make it to DebConf this year; I had a bad cold/flu for a few days before I left, and after a negative covid-19 test just minutes before my flight, I decided to take the plunge and travel. This is just everything in chronological order, more or less; it's the only way I could write it.

DebCamp I planned to spend DebCamp working on various issues. Very few of them actually got done; I spent the first few days in bed further recovering, took a covid-19 test when I arrived and again after I felt better, and both were negative, so I'm not sure what exactly was wrong with me. Between that and catching up with other Debian duties, I couldn't make any progress on the packaging work I wanted to do. I'll still post what I intended here, and I'll try to take a few days to focus on these some time next month: Calamares / Debian Live stuff:
  • #980209 installation fails at the install boot loader phase
  • #1021156 calamares-settings-debian: Confusing/generic program names
  • #1037299 Install Debian -> Untrusted application launcher
  • #1037123 Minimal HD space required too small for some live images
  • #971003 Console auto-login doesn't work with sysvinit
At least Calamares has been trixiefied in testing, so there's that! Desktop stuff:
  • #1038660 please set a placeholder theme during development, different from any release
  • #1021816 breeze: Background image not shown any more
  • #956102 desktop-base: unwanted metadata within images
  • #605915 please make it a non-native package
  • #681025 Put old themes in a new package named desktop-base-extra
  • #941642 desktop-base: split theme data files and desktop integrations in separate packages
The Egg theme that I want to develop for testing/unstable is based on Juliette Taka's Homeworld theme that was used for Bullseye. Egg, as in, something that hasn't quite hatched yet. Get it? (for #1038660) Debian Social:
  • Set up Lemmy instance
    • I started setting up a Lemmy instance before DebCamp, and meant to finish it.
  • Migrate PeerTube to new server
    • We got a new physical server for our PeerTube instance, we should have more space for growth and it would help us fix the streaming feature on our platform.
Loopy: I intended to get the loop for DebConf in good shape before I left, so that we can spend some time during DebCamp making some really nice content, unfortunately this went very tumbly, but at least we ended up with a loopy that kind of worked and wasn't too horrible. There's always another DebConf to try again, right?
So DebCamp as a usual DebCamp was pretty much a wash (fitting with all the rain we had?) for me, at least it gave me enough time to recover a bit for DebConf proper, and I had enough time left to catch up on some critical DPL duties and put together a few slides for the Bits from the DPL talk.

DebConf Bits From the DPL I had very, very little available time to prepare something for Bits from the DPL, but I managed to put some slides together (available on my wiki page). I mostly covered:
  • A very quick introduction of myself (I've done this so many times, it feels redundant giving my history every time), and some introduction on what it is that the DPL does. I declared my intent not to run for DPL again, and the reasoning behind it, and a few bits of information for people who may intend to stand for DPL next year.
  • The sentiment out there for the Debian 12 release (which has been very positive), how we include firmware by default now, and that we're saying goodbye to both the GNU/kFreeBSD and mipsel architectures.
  • Debian Day and the 30th birthday party celebrations from local groups all over the world (and a reminder about the Local Groups BoF later in the week).
  • I looked forward to Debian 13 (trixie!), and how we're gaining riscv64 as a release architecture, as well as loongarch64, and that plans seem to be forming to fix 2k38 in Debian, and hopefully largely by the time the Trixie release comes by.
  • I made some comments about "Enterprise Linux", as people refer to the RHEL eco-system these days, how really bizarre some aspects of it are (like the kernel maintenance), and that some big vendors are choosing to support systems outside of that eco-system now (like cPanel now supporting Ubuntu too). I closed with the quote below from Ian Murdock, and assured the audience that if they want to go out and make money with Debian, they are more than welcome to.
Job Fair I walked through the hallway where the Job Fair was hosted, and enjoyed all the buzz. It's not always easy to get this right, but this year it was very active and energetic, I hope lots of people made some connections! Cheese & Wine Due to state laws and alcohol licenses, we couldn't consume alcohol from outside the state of Kerala in the common areas of the hotel (only in private rooms), so this wasn't quite as big or as fun as our usual C&W parties since we couldn't share as much from our individual countries and cultures, but we always knew that this was going to be the case for this DebConf, and it still ended up being alright. Day Trip I opted for the forest / waterfalls daytrip. It was really, really long with lots of time in the bus. I think our trip's organiser underestimated how long it would take between the points on the route (all in all it wasn't that far, but on a bus on a winding mountain road, it takes long). We left at 8:00 and only found our way back to the hotel around 23:30. Even though we arrived tired and hungry, we saw some beautiful scenery, animals and also met indigenous river people who talked about their struggles against being driven out of their place of living multiple times as government invests in new developments like dams and hydro power. Photos available in the DebConf23 public git repository. Losing a beloved Debian Developer during DebConf To our collective devastation, not everyone made it back from their day trips. Abraham Raji was out on the kayak day trip, and while swimming, got caught by a whirlpool from a drainage system. Even though all of us were properly exhausted and shocked in disbelief at this point, we had to stay up and make some tough decisions. Some initially felt that we had to cancel the rest of DebConf. We also had to figure out how to announce what happened asap both to the larger project and at DebConf in an official manner, while ensuring that due diligence took place and that the family is informed by the police first before making anything public. We ended up cancelling all the talks for the following day, with an address from the DPL in the morning to explain what had happened. Of all the things I've ever had to do as DPL, this was by far the hardest. The day after that, talks were also cancelled for the morning so that we could attend his funeral. Dozens of DebConf attendees headed out by bus to go pay their final respects, many wearing the t-shirts that Abraham had designed for DebConf. A book of condolences was set up so that everyone who wished to could write a message on how they remembered him. The book will be kept by his family.
Today marks a week since his funeral, and I still feel very raw about it. And even though there was uncertainty whether DebConf should even continue after his death, in hindsight I'm glad that everyone pushed forward. While we were all heart broken, it was also heart warming to see people care for each other in all of this. If anything, I think I needed more time at DebConf just to be in that warm aura of emotional support for just a bit longer. There are many people who I wanted to talk to who I barely even had a chance to see. Abraham, or Abru as he was called by some people (which I like because "bru" in Afrikaans is like "bro" in English, not sure if that's what it implied locally too) enjoyed artistic pursuits, but he was also passionate about knowledge transfer. He ran classes at DebConf both last year and this year (and I think at other local events too) where he taught people packaging via a quick course that he put together. His enthusiasm for Debian was contagious, a few of the people who he was mentoring came up to me and told me that they were going to see it through and become a DD in honor of him. I can't even remember how I reacted to that, my brain was already so worn out and stitching that together with the tragedy of what happened while at DebConf was just too much for me. I first met him in person last year in Kosovo, I already knew who he was, so I think we interacted during the online events the year before. He was just one of those people who showed so much promise, and I was curious to see what he'd achieve in the future. Unfortunately, he was taken away from us too soon. Poetry Evening Later in the week we had the poetry evening. This was the first time I had the courage to recite something. I read Ithaka by C.P. Cavafy (translated by Edmund Keeley). The first time I heard about this poem was in an interview with Julian Assange's wife, where she mentioned that he really loves this poem, and it caught my attention because I really like the Weezer song Return to Ithaka and always wondered what it was about, so needless to say, that was another rabbit hole at some point. Group Photo Our DebConf photographer organised another group photo for this event, links to high-res versions available on Aigar's website.
BoFs I didn't attend nearly as many talks this DebConf as I would've liked (fortunately I can catch up on video, should be released soon), but I did make it to a few BoFs. In the Local Groups BoF, representatives from various local teams were present who introduced themselves and explained what they were doing. From memory (sorry if I left someone out), we had people from Belgium, Brazil, Taiwan and South Africa. We talked about types of events a local group could do (BSPs, Mini DCs, sprints, Debian Day, etc.), how to help local groups get started, booth kits for conferences, and setting up some form of calendar that lists important Debian events in a way that makes it easier for people to plan and co-ordinate. There's a mailing list for co-ordination of local groups, and the irc channel is -localgroups on oftc.
If you got one of these Cheese & Wine bags from DebConf, that's from the South African local group!
In the Debian.net BoF, we discussed the Debian.net hosting service, where Debian pays for VMs hosted for projects by individual DDs on Debian.net. The idea is that we start some form of census that monitors the services, whether they're still in use, whether the system is up to date, whether someone still cares for it, etc. We had some discussion about where the lines of responsibility are drawn, and we can probably make things a little bit more clear in the documentation. We also want to offer more in terms of backups and monitoring (currently DDs do get 500GB from rsync.net that could be used for backups of their services though). The intention is also to deploy some form of configuration management for some essentials across the hosts. We should also look at getting some sponsored hosting for this. In the Debian Social BoF, we discussed some services that need work / expansion. In particular, Matrix keeps growing at an increased rate as more users use it and more channels are bridged, so it will likely move to its own host with big disks soon. We might replace Pleroma with a fork called Akkoma, this will need some more homework and checking whether it's even feasible. Some services haven't really been used (like Writefreely and Plume), and it might be time to retire them. We might just have to help one or two users migrate some of their posts away if we do retire them. Mjolner seems to do a fine job at spam blocking, we haven't had any notable incidents yet. WordPress now has improved fediverse support, it's unclear whether it works on a multi-site instance yet, I'll test it at some point soon and report back. For upcoming services, we are implementing Lemmy and probably also Mobilizon. A request was made that we also look into Loomio. More Information Overload There's so much that happens at DebConf, it's tough to take it all in, and also, to find time to write about all of it, but I'll mention a few more things that are certainly worthy of note. During DebConf, we had some people from the Kite Linux team over. KITE supplies the ICT needs for the primary and secondary schools in the province of Kerala, where they all use Linux. They decided to switch all of these to Debian. There was an ad-hoc BoF where locals were listening and fielding questions that the Kite Linux team had. It was great seeing all the energy and enthusiasm behind this effort, I hope someone will properly blog about this! I learned about the VGLUG Foundation, who are doing a tremendous job at promoting GNU/Linux in the country. They are also training up 50 people a year to be able to provide tech support for Debian. I came across the booth for Mostly Harmless, they liberate old hardware by installing free firmware on there. It was nice seeing all the devices out there that could be liberated, and how it can breathe new life into old hardware.
Some hopefully harmless soldering.
Overall, the community and their activities in India are very impressive, and I wish I had more time to get to know everyone better. Food Oh yes, one more thing. The food was great. I tasted more different kinds of curry than I ever did in my whole life up to this point. The lunch on banana leaves was interesting, and also learning how to eat this food properly by hand (thanks to the locals who insisted on teaching me!), it was a fruitful experience? This might catch on at home too: less dishes to take care of! Special thanks to the DebConf23 Team I think this may have been one of the toughest DebConfs to organise yet, and I don't think many people outside of the DebConf team know about all the challenges and adversity this team has faced in organising it. Even just getting to the previous DebConf in Kosovo was a long and tedious and somewhat risky process. Through it all, they were absolute pros. Not once did I see them get angry or yell at each other, whenever a problem came up, they just dealt with it. They did a really stellar job and I did make a point of telling them on the last day that everyone appreciated all the work that they did. Back to my nest I bought Dax a ball back from India, he seems to have forgiven me for not taking him along.
I'll probably take a few days soon to focus a bit on my bugs and catch up on my original DebCamp goals. If you made it this far, thanks for reading! And thanks to everyone for being such fantastic people.

12 September 2023

Jo Shields: Building a NAS

The status quo Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).
QNAP TS-453mini product photo
That thing has been in service for about 8 years now, and it's been a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP's OS was not up to the same standard as Synology's, perhaps best exemplified by "HappyGet 2", the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless, but a bad omen for overall software quality.
The logo for QNAP HappyGet 2 and Blizzard's StarCraft 2 side by side
Additionally, the embedded Celeron processor in the NAS turned out to be an issue in some cases. It turns out, when playing back videos with subtitles, most Plex clients do not support subtitles properly; instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones: some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second. The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music, gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days: digital rarities, or at least digital detritus, that I felt a real sense of loss at having to replace. This episode was caused by a ransomware targeting specific vulnerabilities in the QNAP OS, not an error on my part. So, I decided to start planning a replacement with:
  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust
At the time, no consumer NAS offered everything (the Asustor FS6712X exists now, but didn't when this project started), so I opted to go for a full DIY build rather than an appliance; not the first time I've jumped between appliances and DIY for home storage.

Selecting the core of the system There aren't many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren't actually compliant Mini-ITX size, they're a proprietary "Deep Mini-ITX" with the regular screw holes, but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built in dual 10 gigabit ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It's astonishingly well featured, just a shame it costs about $450 compared to a good consumer-grade Mini ITX AM4 board costing less than half that. I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else looking into a very similar project to mine around the same timespan. The next question was the CPU. An important feature of a system expected to run 24/7 is low power, and AM4 chips can consume as much as 130W under load, out of the box. At the other end, some models can require as little as 35W under load: the OEM-only GE suffix chips, which are readily found for import on eBay. In their PRO variant, they also support ECC (all non-G Ryzen chips support ECC, but only Pro G chips do). The top of the range 8 core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6 core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4 PC-3200 direct from Micron for under $50 a piece, that left only cooling as an unsolved problem to get a bootable test system. The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non-standard cooling layout of the board: instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate, the stock Intel 115x cooler attaches to the holes with push pins). As such every single cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild glue; with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which is basically any CPU cooler for more than a decade). I picked an oversized low profile Thermalright AXP120-X67 hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.
Thermalright AXP120-X67, AMD Ryzen 5 PRO 5650GE, ASRock Rack X570D4I-2T, all assembled and running on a flat surface

Testing up to this point Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn't the best I've ever used by a long shot, but it's minimum viable and allowed me to configure the basics and boot from media entirely via a Web browser.
Memtest86 showing test progress, taken from IPMI remote control window
One sad discovery, however, which I've never seen documented before, concerns PCIe bifurcation. With PCI Express, you have a number of lanes which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that's 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it). It's possible, with motherboard and CPU support, to split PCIe groups up: for example, an 8x slot could be split into two 4x slots (e.g. allowing two NVMe drives in an adapter card; NVMe drives these days all use 4x). However, with a Cezanne Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (i.e. used for four NVMe drives); the most bifurcation it allows is 8x4x4x, which is useless in a NAS.
Screenshot of PCIe 16x slot bifurcation options in UEFI settings, taken from IPMI remote control window
As such, I had to abandon any ideas of an all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, to a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would be nearly 10x as fast as SATA SSDs, but at least the SATA SSD route would still outperform any spinning rust choice on the market (including the fastest 10K RPM SAS drives)

Containing the core The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it's easy to turn an SFX power supply into ATX, and the worst result is you have less space taken up in your case, hardly the worst problem to have. That said, on to picking a case. There's only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here's how close together the hotswap bay (right) and power supply (left) are:
Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome
With actual cables connected, the cable clearance problem is even worse:
Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome
Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it's garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25"-to-2.5" hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25" bay. This is no longer a served market: 5.25" bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: The Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated so I opted to get that one; however it seems the global supply of new old stock fully dried up in the two weeks between me making a decision and placing an order, leaving only the Silverstone case. Icy Dock have a selection of 8-bay 2.5" SATA 5.25" hot swap chassis choices in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B, to reduce cable clutter: it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn't have any SATA ports on board, instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen G chips meant I wouldn't be able to run all six bays successfully.
NAS build in Silverstone SUGO 14, mid build, panels removed
Silverstone SUGO 14 from the front, with hot swap bay installed

Actual storage for the storage server My concept for the system always involved a fast boot/cache drive in the motherboard's M.2 slot, non-redundant (just backups of the config if the worst were to happen) and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200-$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles). So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200, or Samsung 870 QVO 8TB consumer drives, at about $375. I did spend a long time agonizing over the specification differences, the ZFS usage reports, the expected lifetime endurance figures, but in reality, it came down to price: $1600 of expensive drives vs $3200 of even more expensive drives. That's 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I'm using about 5TB of the old NAS, so that's a LOT of overhead for expansion.
Storage SSD loaded into hot swap sled

Booting up Bringing it all together is the OS. I wanted an appliance NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, decided on TrueNAS Scale (the beta of the 2023 release, based on Debian 12).
TrueNAS Dashboard screenshot in browser window
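For context, TrueNAS drives all of this through its web UI; purely as a sketch of what happens underneath (the pool name and device names here are made up, not taken from this build), creating a RAID-Z1 pool with auto-TRIM enabled boils down to:
# create an 8-disk RAID-Z1 pool and turn on automatic TRIM for the SSDs
zpool create tank raidz1 sda sdb sdc sdd sde sdf sdg sdh
zpool set autotrim=on tank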
I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:
                      IOPS     Bandwidth
4k random writes      19.3k    75.6 MiB/s
4k random reads       36.1k    141 MiB/s
Sequential writes     -        2300 MiB/s
Sequential reads      -        3800 MiB/s
Results using fio parameters suggested by Huawei
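The exact fio parameters suggested by Huawei aren't reproduced in the post; purely as an illustration of the kind of run that produces the 4k random-write line above (the file path and job sizes below are my placeholders, not the author's invocation), such a test typically looks like:
% fio --name=randwrite --filename=/mnt/tank/fio.test --size=4G \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting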
And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:
                      IOPS     Bandwidth
4k random writes      16k      ?
4k random reads       90k      ?
Sequential writes     -        280 MiB/s
Sequential reads      -        560 MiB/s
Numbers quoted by Intel's SSD successor, Solidigm.
Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:
                      IOPS     Bandwidth
4k random writes      430      1.7 MiB/s
4k random reads       8006     32 MiB/s
Sequential writes     -        311 MiB/s
Sequential reads      -        566 MiB/s
Performance seems pretty OK. There's always going to be an overhead to RAID. I'll settle for the 45x improvement on random writes vs. its predecessor, and 4.5x improvement on random reads. The sequential write numbers are gonna be impacted by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance. It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows. And that's it! I built a NAS. I intend to add some fans and more RAM, but that's the build. Total spent: about $3000, which sounds like an unreasonable amount, but it's actually less than a comparable Synology DiskStation DS1823xs+ which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!
The final system, powered up
(Also posted on PCPartPicker)

6 September 2023

Jonathan Dowland: VW Lupo mirror-adjustment knob

VW Lupo wing mirror adjustment knob
I designed and printed a replacement knob for the wing-mirror adjustment on a Volkswagen Lupo. The original had a "pineapple" style texture on it. For my print, I sliced with Prusa Slicer and turned on "fuzzy surfaces" to get a texture on the grip-side of the knob.

30 August 2023

Andrew Cater: Building a mirror of various Red Hat oriented "stuff"

Building a mirror for rpm-based distributions.
I've already described in brief how I built a mirror that currently mirrors Debian and Ubuntu on a daily basis. That was relatively straightforward given that I know how to install Debian and configure a basic system without a GUI, and the ftpsync scripts are well maintained; I can pull some archives and get one pushed to me such that I've always got up-to-date copies of Debian and Ubuntu. I wanted to do something similar using Rocky Linux to pull in archives for Almalinux, Rocky Linux, CentOS, CentOS Stream and (optionally) Fedora. (This was originally set up using Red Hat Enterprise Linux on a developer's subscription and rebuilt using Rocky Linux so that the machine could be passed on to someone else if necessary. Red Hat 9.1 has moved to x86-64-v2 - on the machine I have (HP Microserver gen 8), 9.1 fails immediately. It has been rebuilt to use Rocky 8.8.) This is a minimal install of Rocky as console only - the machine it's on only has 4G of memory so won't run a GUI reliably. It will run Cockpit so can be remotely administered. One user to run everything - mirror.
Minimal install of Rocky 8.7 from the DVD .iso. SELinux is enabled, and SSH works for remote access. SELinux had to be tweaked so that /srv/ carries the appropriate context to be served by nginx. /srv is a large LVM volume rather than a RAID 6 - I didn't have enough disks.
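The post doesn't spell out the SELinux tweak; one common way to do it (my assumption, not the author's exact commands) is to label the tree with the httpd content type, which nginx is also confined under, and restore the contexts:
# dnf install policycoreutils-python-utils
# semanage fcontext -a -t httpd_sys_content_t "/srv(/.*)?"
# restorecon -Rv /srv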
Adding nginx, enabling Cockpit and editing the Rocky Linux mirroring scripts resulted in something straightforward to reproduce.

nginx

I cheated and stole large parts of my Debian config. The crucial part to remember is that there is no autoindexing by default and I had to dig to find the correct configuration snippet.

# Load configuration files for the default server block.

include /etc/nginx/default.d/*.conf;

location / {
    autoindex on;
    autoindex_exact_size off;
    autoindex_format html;
    autoindex_localtime off;
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
}
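After dropping that snippet into place, the usual check-and-reload cycle applies (standard nginx and systemd commands, nothing specific to this setup):
# nginx -t
# systemctl reload nginx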

Rocky Linux mirroring scripts

Systemd unit file for service

[Unit]
Description=Rocky Linux Mirroring script

[Service]
Type=simple
User=mirror
Group=mirror
ExecStart=/usr/local/bin/rockylinux

[Install]
WantedBy=multi-user.target

Rocky linux timer file
[Unit]
Description=Run Rocky Linux mirroring script daily

[Timer]
OnCalendar=*-*-* 08:13:00
OnCalendar=*-*-* 22:13:00
Persistent=true

[Install]
WantedBy=timers.target
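Assuming the unit and timer above are installed as /etc/systemd/system/rockylinux.service and rockylinux.timer (the post doesn't give the file names), they get activated the usual way:

# systemctl daemon-reload
# systemctl enable --now rockylinux.timer
# systemctl list-timers rockylinux.timer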

Mirror script

#!/bin/env bash
#
# mirrorsync - Synchronize a Rocky Linux mirror
# By: Dennis Koerner <koerner@netzwerge.de>
#

# The latest version of this script can be found at:
# https://github.com/rocky-linux/rocky-tools
#
# Please read https://docs.rockylinux.org/en/rocky/8/guides/add_mirror_manager
# for further information on setting up a Rocky mirror.
#
# Copyright (c) 2021 Rocky Enterprise Software Foundation

This is a very long script in total.
The crucial parts I changed only list the mirror to pull from and the place to put it.

# A complete list of mirrors can be found at
# https://mirrors.rockylinux.org/mirrormanager/mirrors/Rocky
src="mirrors.vinters.com::rocky"

# Your local path. Change to whatever fits your system.
# $mirrormodule is also used in syslog output.
mirrormodule="rocky-linux"
dst="/srv/${mirrormodule}"

filelistfile="fullfiletimelist-rocky"
lockfile="/home/mirror/rocky.lockfile"
logfile="/home/mirror/rocky.log"
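The remainder of the script isn't reproduced here; stripped of its locking and file-list handling, the heart of such a mirror run is an rsync call along these lines (a sketch only, not the actual rocky-tools code):

rsync -avSH --delete "${src}/" "${dst}/" >> "${logfile}" 2>&1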

The logfile looks something like this (the single time spec file is used to check whether another rsync run is needed):
deleting 9.1/plus/x86_64/os/repodata/3585b8b5-90e0-4856-9df2-95f646bc62c7-PRIMARY.xml.gz

sent 606,565 bytes received 38,808,194,155 bytes 44,839,746.64 bytes/sec
total size is 1,072,593,052,385 speedup is 27.64
End: Fri 27 Jan 2023 08:27:49 GMT
fullfiletimelist-rocky unchanged. Not updating at Fri 27 Jan 2023 22:13:16 GMT
fullfiletimelist-rocky unchanged. Not updating at Sat 28 Jan 2023 08:13:16 GMT
It was essentially easier to store fullfiletimelist-rocky in /home/mirror than anywhere else.

Very similar small modifications to the Rocky mirroring scripts were used to mirror the other distributions I'm mirroring. (Almalinux, CentOS, CentOS Stream, EPEL and Rocky Linux).

29 August 2023

Jonathan Dowland: Gazelle Twin

A couple of releases A couple of releases
I discovered Gazelle Twin last year via Stuart Maconie's Freak Zone (two of her tracks ended up on my 2022 Halloween playlist). Through her website I learned of The Horror Show! exhibition at Somerset House1 in London that I managed to visit earlier this year. I've been intending to write a 5-track blog post (a la Underworld, the Cure, Coil) for a while but I have been spurred on by the excellent news that she's got a new album on the way, and, she's performing at the Sage Gateshead in November. Buy tickets now!! Here's the five tracks I recommend to get started:
  1. Anti-Body, from 2014's UNFLESH. I particularly love the percussion. Perc did a good hard-house-style remix on Fleshed Out, the companion remix album. Anti-Body by Gazelle Twin
  2. Fire Leap, from Gazelle Twin and NYX's collaborative album Deep England. The album is a re-interpretation of material from Gazelle Twin's earlier album, Pastoral, with the exception of this track, which is a cover of Paul Giovanni's song from The Wicker Man. There's a common aesthetic in all three works: eerie-folk, England's self-mythologising as seen through a warped and cracked lens. Anti-Body by Gazelle Twin There's a fantastic performance video of the whole album, directed by Iain Forsyth and Jane Pollard, available on YouTube.
  3. Better In My Day, from the aforementioned Pastoral. This track and this album are, I think, less accessible, more challenging than the re-interpreted material. That's not a bad thing: I'm still working on digesting it! This is one of the more abrasive, confrontational tracks. Better In My Day by Gazelle Twin
  4. I am Shell I am Bone, from way back to her first release, The Entire City in 2011. Composed, recorded, self-produced, self-released. It's remarkable to me how different each phase of GT's work are to one another. This album evokes a strong sense of atmosphere and place to me. There's a hint of possible influence of Joy Division's Unknown Pleasures (or New Order's Movement) in places. The b-side to this song is a cover of Joy Division's The Eternal. I Am Shell I Am Bone by Gazelle Twin
  5. GT re-issued This Entire City in 2022 along with a companion-piece EP of newly-released material from the same era, The Wastelands. This isn't cutting room floor stuff, though, as evidenced by the strength of my final pick, Hole in my Heart. Hole In My Heart by Gazelle Twin
It's hard to pick just five tracks when doing these (that's the point I suppose). I note that I haven't picked anything from her wide-ranging soundtrack work: her last three or four of her releases have been soundtrack works, released on the well-respected UK label Invada, as will be her forthcoming album. You can find all the released stuff on Gazelle Twin's Bandcamp Page.

  1. I recently learned that PJ Harvey once hosted album sessions in Somerset House, and allowed fans to watch her at work during the recording: https://www.theguardian.com/music/2015/jan/16/pj-harvey-somerset-house-recording-in-progress

27 August 2023

Shirish Agarwal: FSCKing /home

There is a bit of context that needs to be shared before I get to this, and it will be a long one. For reasons known and unknown, I have a lot of sudden electricity outages. Not just me, but all those who are on my line. A discussion with a lineman revealed that around 200+ families and businesses are on the same line, and when the electricity goes, for whatever reason, it goes for all of them. Even some of the traffic lights don't work. This affects software more than hardware, or in some cases both. And more specifically, HDDs are vulnerable. I had bought an APC unit several years ago precisely for this, but over a period of time it stopped functioning and now also trips when the electricity goes out. It's been 6-7 years, so I can't even ask customer service to fix the issue, and from whatever discussions I have had with APC personnel, the only meaningful option is to buy a new unit, but even then I'm not sure this is an issue that can be resolved. That brings me to the issue that happens once in a while where the system fsck is unable to repair /home and you need to use an external pen drive for the same. This is how my HDD stacks up:
/ is on /dev/sda7, /boot is on /dev/sda6, /boot/efi is on /dev/sda2 and /home is on /dev/sda8, so theoretically, if /home for some reason doesn't work, I should be able to drop down to /dev/sda7, unmount /dev/sda8, run fsck and carry on with my work. I tried it a number of times but it didn't work. I was dropping down to tty1 and attempting the same; no dice, as root/superuser getting only the barest x-term. So first I tried asking a couple of friends who live near me. Unfortunately, both are MS-Windows users and both use what are called company-owned laptops. Surfing on those systems was a nightmare, especially the number of pop-up ads that the web has become. And to think about how much harassment ublock origin has saved me over the years. One of the more interesting bits from both their devices: any and all downloads from fosshub were showing up as malware. I dunno how much of that is true or not, as I haven't had to use it; most software we get through Debian archives or, if needed, download from github or wherever, run/install it and you are in business. Some of them even get compiled into a good .deb package, but that's outside the conversation atm. My only experience with fosshub was a few years before the pandemic and that was good. I dunno if fosshub really has malware or malwarebytes was giving false positives. It also isn't easy to upload a 600 MB+ ISO file somewhere to see whether it really has malware or not. I used to know of a site or two where you could upload a suspicious file and almost 20-30 famous and known antivirus and anti-malware engines would check it and tell you the result. Unfortunately, I have forgotten the URL, and seeing things from the MS-Windows perspective, things have gotten way worse than before. So, left with no choice, I turned to the local LUG for help. Fortunately, my mobile does have e-mail and I could use gmail to solicit help. There could have been any number of live CDs that could have helped, but one of my first experiences with GNU/Linux was that of Knoppix, which I had got from Linux For You (now known as OSFY) sometime in 2003. IIRC, I had read an interview of Mr. Klaus Knopper as well and was impressed by it. In those days, Debian wasn't accessible to non-technical users, and Knoppix was a good tool to see it. In fact, I think he was the first to come up with the idea of a Live CD and run with it, while Canonical/Ubuntu took another 2 years to do it. I think both the CD and the interview by distrowatch were shared by LFY in those early days. Of course, later the story changes after he got married, but I think that is more about Adriane rather than Knoppix. So Vishal Rao helped me out. I got an HP USB 3.2 32GB Type C OTG Flash Drive x5600c (Grey & Black) from a local hardware dealer at around a similar price point. The dealer is a big one and has almost 200+ people scattered around the city doing channel sales, who in turn sell to end users. I asked one of the representatives for their opinion on stopping electronic imports (apparently more things were added later to the list, including all sorts of sundry items from digital cameras to shavers and whatnot). The gentleman replied that he hopes it would not happen, otherwise more than 90% would have to leave their jobs. They have already started into lighting fixtures (LED bulbs, tubelights etc.) but even those would come under the same ban. The main argument, as I have shared before, is that the Indian Govt.
thinks we need our home-grown CPU, and while I have no issues with that, as shared before, except for RISC-V there is no other space where India could look into doing that. Especially after the Chip Act, Biden has made it so that any new fabs or any new thing in chip fabrication will be shared with the Five Eyes only. Also, while India is looking to generate about 2000 GW from solar by 2030, China has an ambitious 20,000 GW generation capacity by the end of this year, and the Chinese are the ones who are actually driving down the module prices. The Chinese are also automating their factories as if there's no tomorrow. The end result of both is that China will continue to be the world's factory floor for the foreseeable future, and whoever may try whatever policies, it probably is gonna be difficult to compete with them on prices of electronic products. That's the reason the U.S. has been trying to make sure China doesn't get the latest technology, but that perhaps is a story for another day.

HP USB 3.2 Type C OTG Flash Drive x5600c People who have read this blog know that most of the flash drives today are MLC drives and do not have the longevity of SLC drives. For those who maybe are new, this short brochure/explainer from Kingston should enhance your understanding. SLC drives are rare and expensive. There are also a huge number of counterfeit flash drives available in the market, and almost all the companies' efforts, whether it's Kingston, HP or any other manufacturer, have been like a drop in the bucket. Coming back to the topic at hand: while there are some tools and products that can tell you whether a pen drive is genuine or not (basically by probing the memory controller and the info you get from that), that probably is a discussion left for another day. It took me a couple of days and finally I was able to find time to go to Vishal's place. The journey back and forth lasted almost 6 hours, with crazy traffic jams. Tells you why Pune, or specifically the Swargate-Hadapsar patch, really needs a Metro. While an in-principle nod has been given, it probably is 5-7 years or more before we actually have a functioning metro. Even the current route the Metro has was supposed to be done almost 5 years ago to the date, and even the modified plan was from 3 years ago. And even now, most of the stations still need a lot of work to be done. PMC, Deccan, as examples, still have loads to be done. Even PMT (Pune Municipal Transport), which is supposed to do the last-mile connections via its buses, has been putting in half-hearted attempts.

Vishal Rao

While Vishal had apparently seen me and perhaps we had also interacted, this was my first memory of him, although we have been on a few boards now and then, including Stack Exchange. He was genuine and warm and shared 4-5 distros with me, including Knoppix and SystemRescue as shared by Arun Khan. While this was the first time I had heard about Ventoy, apparently Vishal has been using it for a couple of years now. It's a simple shell script that you need to download and run on your pen drive, and then you just dump all the .iso images onto it. The easiest way to explain Ventoy is that it looks and feels like GRUB. Which also reminds me of an interaction I had with Vishal on mobile. While troubleshooting the issue, I was unsure whether it was the filesystem that was the issue or whether systemd was also corrupted. Vishal reminded me to add fastboot to the kernel parameters to see if I was able to boot without fscking and get into userspace, i.e. /home. Although journalctl and systemctl were responding even on tty1, I was still a bit apprehensive. Using fastboot I was able to mount the whole thing and get into userspace, and that told me that it's only some of the inodes that need clearing and there probably are some orphaned inodes. Vishal has got a mini-PC that he uses as a server: he downloads stuff to it and then downloads stuff from it. From a privacy, backup etc. point of view it is a better way to do things, but then you need a laptop to access it. I am sure he probably uses it for virtualization and in other ways as well, but we just didn't have time for that discussion. Also, a mini-PC can set you back anywhere from 25 to 40k depending on the mini-PC and the RAM and the SSD, and you need either a lappy or a Raspberry Pi with some kind of visual display to interact with it. While he did share some of the things, there probably could have been a far longer interaction just on that, but that is probably best left for another day. Now at my end, the system I had bought is about 5-6 years old. At that time it only had 6 USB 2.0 ports and 2 USB 3.0 (A) ports.
The above image does tell of the various form factors. One of the other things is that I found the pen drive and its connectors to be extremely fiddly. It took me a number of attempts of fiddling around with it before I was finally able to put it in and access the pen drive partitions. Unfortunately, I was unable to see/use SystemRescue, but Knoppix booted up fine. I mounted the partitions briefly to see where was what, and sure enough /dev/sda8 showed my /home files and folders. I unmounted it, then used $ fsck -y /dev/sda8 and was back in business. This concludes what happened.

Updates

Quite a bit was left out of the original post, part of which I didn't know and partly stuff which is interesting and perhaps needs a blog post of its own. It's sad I won't be part of DebConf, otherwise who knows what else I would have come to know.
  1. One of the interesting bits that I came to know about last week is the Alibaba T-Head TH1520 RISC-V CPU, which I saw first being demoed on a laptop and then a standalone tablet. The laptop is an interesting proposition considering Alibaba opened up its chip operation only a couple of years ago. To have an SoC within 18 months and then have it in production for lappies and tablets is practically unheard of, especially for a newcomer/startup; even AMD took 3-4 years for its first chip. It seems they (Alibaba) would be parceling them out by quarter end 2023 and another 1000 pieces/units in the first quarter next year. While the scale is nothing compared to the behemoths, I think this is more a matter of getting feedback on both the hardware and software. The value proposition is much better than what most of us get, at least in India. For example, they are doing a warranty for 5 years and also giving spare parts. RISC-V has been having a lot of resurgence in China, in part because it's an open standard and partly because development will be far cheaper and faster than trying x86 or x86-64. If you look at both the incumbent manufacturers, due to their duopoly both of them now give 5-8% increments per year, and if you look back in history, you would find that when more chips were in competition, they used to give 15-20% performance increments per year.
  2. While Vishal did share with me what he uses the mini-PC for and the various ways he uses it, I did have fun speculating on what else he could use it for. As Romane has shared for his own case, the first thing that came to my mind was backups. Filesystems are notorious in the sense that they can be corrupted, or are prone to being corrupted, very easily, as can be seen above. Backups certainly make a lot of sense, especially with rsync. The other thing that came to my mind was having some sort of A.I. and chat server. IIRC, somebody has put quite a bit of open-source public-domain data on Debian servers that could be used to run a chatbot or an A.I. or both, similar to ChatGPT but with a much more limited scope than what ChatGPT uses. I was also thinking of a media server, which Vishal did share he runs. I may visit him sometime to see what choices he made and what he learned in the process, if anything. Another thing that could be done is to just take a dump of any of the commodity markets, or any markets, and have some sort of predictive A.I. or whatever. A whole bunch of people have scammed thousands of Indian users on this, but doing it on your own and for your own purposes, to aid you in buying and selling stocks or whatever commodity you may fancy, is another matter; after all, nowadays the markets themselves are virtual. While Vishal's mini-PC doesn't have any graphics, if it were an AMD APU mini-PC, something like this, he could have hosted games in the thick-server, thin-client way, where all graphics processing happens on the server rather than the client. With virtual reality I think the same case, or an even stronger one, could be made. The only problem with VR/AR is that we don't really have mass-market goggles, eyepieces or headsets. The only notable project that Google has/had in that space is the Google Cardboard VR headset, and the experience is not that great, or at least was not that great a few years back when I got to hear about and experience it. Most of the VR headsets, say for example the Meta Quest 2, are around INR 44k/- while the Quest 3 is INR 50k+ and officially not available. As I have shared before, the holy grail of VR would be when it falls below INR 10k/- so it becomes just another accessory, not something you really have to save for. There also isn't much content for it, but then that is also the whole chicken-or-egg situation. This again is a non-stop discussion; so much has been happening in that space that it needs its own blog post/article. Till later.

26 August 2023

Christoph Berg: PostgreSQL Popularity Contest

Back in 2015, when PostgreSQL 9.5 alpha 1 was released, I had posted the PostgreSQL data from Debian's popularity contest. 8 years and 8 PostgreSQL releases later, the graph now looks like this: Currently, the most popular PostgreSQL on Debian systems is still PostgreSQL 13 (shipped in Bullseye), followed by PostgreSQL 11 (Buster). At the time of writing, PostgreSQL 9.6 (Stretch) and PostgreSQL 15 (Bookworm) share the third place, with 15 rising quickly.

21 August 2023

Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR: Color is a visual perception. Human eyes can detect a broader range of colors than any device in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.
The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I'm putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all the explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you? So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D

The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, there were some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community. All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching the hardware capabilities of all vendors well. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows the Linux/DRM color interface without driver-specific color properties [*]: Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API. For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of a pre-blending Color Transformation Matrix (plane CTM) and the differentiation of pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities

Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations. A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux?

Blending is the process of combining multiple planes (framebuffer abstractions) according to their mode settings. Before blending, we can manage the colors of various planes separately; after blending, we have combined those planes into only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, bring in a wide color gamut (WCG), and convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management. The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I'm not considering some workarounds in the AMD display manager mapping of the DRM CRTC de-gamma and DRM CRTC CTM properties to the pre-blending DC de-gamma and gamut remap blocks, respectively. So, in more detail, it only exposes three post-blending features:
  • DRM CRTC de-gamma: used to convert the framebuffer's colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.
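
For concreteness, here is a minimal sketch (not from the original post) of how userspace typically programs these post-blending CRTC properties through the generic DRM atomic API with libdrm. The property names "DEGAMMA_LUT", "CTM" and "GAMMA_LUT" are the standard generic KMS names; the lookup helper, the 256-entry ramp and the lack of error handling are illustrative assumptions.

```c
/* Minimal sketch: program the generic post-blending CRTC color properties
 * (DEGAMMA_LUT, CTM, GAMMA_LUT) with libdrm. find_crtc_prop() is a
 * hypothetical helper; in real code the LUT length should match the
 * CRTC's GAMMA_LUT_SIZE property. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static uint32_t find_crtc_prop(int fd, uint32_t crtc_id, const char *name)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, crtc_id, DRM_MODE_OBJECT_CRTC);
    uint32_t id = 0;
    for (uint32_t i = 0; props && i < props->count_props; i++) {
        drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
        if (p && strcmp(p->name, name) == 0)
            id = p->prop_id;
        drmModeFreeProperty(p);
    }
    drmModeFreeObjectProperties(props);
    return id;
}

int set_crtc_gamma(int fd, uint32_t crtc_id)
{
    /* Identity 256-entry gamma ramp as an example payload. */
    struct drm_color_lut lut[256];
    for (int i = 0; i < 256; i++) {
        uint16_t v = (uint16_t)((i * 0xffff) / 255);
        lut[i].red = lut[i].green = lut[i].blue = v;
        lut[i].reserved = 0;
    }

    uint32_t blob_id = 0;
    if (drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &blob_id))
        return -1;

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    drmModeAtomicAddProperty(req, crtc_id,
                             find_crtc_prop(fd, crtc_id, "GAMMA_LUT"),
                             blob_id);
    /* DEGAMMA_LUT and CTM are set the same way, each with its own blob. */
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);
    drmModeDestroyPropertyBlob(fd, blob_id);
    return ret;
}
```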

AMD driver-specific color management interface

We can compare the Linux color management API with and without the driver-specific color properties. From now on, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again: Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interfaces, as summarized by this updated diagram: The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between the API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:
  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three-dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can't be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions;
Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021 documentation series. In the next post, I'll revisit this topic, explaining display and color blocks in detail.
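
Because these AMD properties are driver-specific rather than generic KMS properties, userspace has to discover them by name on the plane object at runtime. Below is a minimal, hypothetical sketch of that discovery step; the property names ("AMD_PLANE_DEGAMMA_LUT" and friends) follow the naming pattern suggested by the list above but are assumptions here, so check the running kernel for the authoritative names.

```c
/* Sketch: probe a plane for AMD driver-specific color properties by name.
 * The names below are assumptions based on the feature list; the driver
 * only exposes its driver-specific properties when built/configured to
 * advertise them. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static const char *amd_plane_props[] = {
    "AMD_PLANE_DEGAMMA_LUT", "AMD_PLANE_DEGAMMA_TF",
    "AMD_PLANE_CTM",
    "AMD_PLANE_SHAPER_LUT", "AMD_PLANE_SHAPER_TF",
    "AMD_PLANE_LUT3D",
    "AMD_PLANE_BLEND_LUT", "AMD_PLANE_BLEND_TF",
};

void list_amd_color_props(int fd, uint32_t plane_id)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
    if (!props)
        return;

    for (uint32_t i = 0; i < props->count_props; i++) {
        drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
        if (!p)
            continue;
        for (size_t j = 0; j < sizeof(amd_plane_props) / sizeof(*amd_plane_props); j++) {
            if (strcmp(p->name, amd_plane_props[j]) == 0)
                printf("plane %u exposes %s (prop id %u)\n",
                       plane_id, p->name, p->prop_id);
        }
        drmModeFreeProperty(p);
    }
    drmModeFreeObjectProperties(props);
}
```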

How did we get a large set of color features from AMD display hardware?

So, looking at AMD hardware color capabilities in the first diagram, we can see there is no post-blending (MPC) de-gamma block in any hardware family. We can also see that the AMD display driver maps the CRTC/post-blending CTM to the pre-blending (DPP) gamut_remap, but there is a post-blending (MPC) gamut_remap (DRM CTM) in newer hardware versions, which include the SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information. I needed to rework the two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for the SteamDeck. I changed the DC mapping to detach stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to the DPP/pre-blending gamut remap block and DRM CRTC CTM to the MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities, since older versions cannot support this feature without clashes with DRM CRTC CTM. Unfortunately, I couldn't prevent a conflict between AMD plane de-gamma and DRM CRTC de-gamma, since post-blending de-gamma isn't available in any AMD hardware version until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space, and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously. Finally, we had no other clashes when enabling the other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way, since this was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine. Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view): In the next blog post, I'll describe the implementation and technical details of each pre- and post-blending color block/property in the AMD display driver. * Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities. ** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline. *** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

Jonathan Dowland: FreshRSS

Now that it's more convenient for me to run containers at home, I thought I'd write a bit about web apps I am enjoying. First up, FreshRSS, a web feed aggregator. I used to make heavy use of Google Reader until Google killed it, and although a bunch of self-hosted clones sprang up very quickly afterwards, I didn't transition to any of them. Then followed a number of years within which, in retrospect, I basically didn't do a great job of organising reading the web. This lasted until a couple of years ago when, on a whim, I tried out NetNewsWire for iOS. NetNewsWire is a well-established and much-loved feed reader for Mac which I've never used. I used the iOS version in isolation for a long time: only dipping into web feeds on my phone and never on another device. I'd like to see the old web back, and do my part to make that happen. I've continually published RSS and Atom feeds for my own blog. I'm also trying to blog more: this might be easier now that Twitter (which, IMHO, took a lot of the energy for quick writing out of blogging) is mortally wounded. So, I'm giving FreshRSS a go. Early signs are good: it's fast, lightweight, easy to use, vaguely resembles how I remember Google Reader, and has a native dark-mode. It's still early days building up a new list of feeds to follow. I'll be sure to share interesting ones as I discover them!

20 August 2023

Russell Coker: GPT Systems and Relationships

Sam Hartman wrote an interesting blog post about his work as a sex and intimacy educator and how GPT systems could impact that [1]. I've read some positive reviews of Replika, a commercial system that is somewhat promoted as a counsellor [2], so I decided to try it out. In my brief trial it seemed to be using all the methods that Android pay-to-play games are known for: multiple types of in-game currency, paying to buy new clothes etc. for your "friend", and so on. Basically it seems pretty horrible. I didn't pay for it, and the erotic and romantic features all require payment, so I didn't test those. When thinking about this logically, having a system designed to deal with people when they are vulnerable (either being in a romantic relationship or getting counselling) that uses manipulative techniques to get money from them can't have a good result. So a free software system seems the best option. When I first learned of virtual girlfriends I never thought I would feel compelled to advocate for a free software virtual dating program, but that's where the world has got to. Virtual girlfriends have been around for years now. Several years ago I watched a documentary about their use in Japan. It seemed a bit strange when a group of men who had virtual girlfriends had a dinner party with their tablets and phones propped up so their girlfriends could join in, as they all appeared to be dating the same girl. The documentary didn't go into enough detail to cover whether the girlfriend app could learn or be customised enough that they would seem to have different personalities. Virtual boyfriends have also been around for a while, apparently without most people noticing. I just Googled it and found a review of a virtual boyfriend app published in 2016! One thing that will probably concern people is the possibility for virtual dating systems to be used for inappropriate things. That is a reasonable thing to be concerned about, but I don't think it's possible to prevent technology that has already been released from doing such things. As a general rule technology can always be used for good and bad things, so we need to just make it easy to do good things and let the legal system develop ways of dealing with the bad things.

13 August 2023

Jonathan Dowland: Terrain base for 3D castle

terrain base for the castle
I designed and printed a "terrain" base for my 3D castle in OpenSCAD. The castle was the first thing I designed and printed on our (then new) office 3D printer. I use it as a test bed if I want to try something new, and this time I wanted to try procedurally generating a model. I've released the OpenSCAD source for the terrain generator under the name Zarchscape.

mid 90s terrain generation
Lots of mid-90s games had very boxy floors
Terrain generation, 90s-style. From this article: https://web.archive.org/web/19990822085321/http://www.gamedesign.net/tutorials/pavlock/cool-ass-terrain/
Back in the 90s I spent some time designing maps/levels/arenas for Quake and its sibling games (like Half-Life), mostly in the tool Worldcraft. A lot of beginner maps (including my own) ended up looking pretty boxy. I once stumbled across a blog post that taught me a useful trick for making more natural-looking terrain. In brief: tessellate the floor region with triangle polygons, then randomly add some jitter to the z-dimension of their vertices. A really simple technique with fairly dramatic results.

OpenSCAD

Doing the same in OpenSCAD stretched me, and I think stretched OpenSCAD. It left me with some opinions which I'll try to write up in a future blog post.
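
As an illustration of the trick described above (not the published Zarchscape OpenSCAD code), here is a minimal sketch in C: it tessellates a rectangular floor into a regular grid of triangles and adds random jitter to each vertex's z coordinate. The grid size and jitter amplitude are arbitrary choices.

```c
/* Minimal sketch of the 90s terrain trick: tessellate a floor into
 * triangles, then jitter each vertex's z coordinate. Emits the triangles
 * as "x y z" vertex lines, three per triangle, which is enough to feed a
 * mesh builder or an OpenSCAD polyhedron() generator. */
#include <stdio.h>
#include <stdlib.h>

#define NX 10          /* grid cells in x */
#define NY 10          /* grid cells in y */
#define CELL 5.0       /* cell size */
#define JITTER 1.5     /* max |z| displacement */

static double z[NY + 1][NX + 1];

static double jitter(void)
{
    return ((double)rand() / RAND_MAX * 2.0 - 1.0) * JITTER;
}

int main(void)
{
    /* One jittered height per grid vertex, so shared vertices stay shared. */
    for (int j = 0; j <= NY; j++)
        for (int i = 0; i <= NX; i++)
            z[j][i] = jitter();

    /* Two triangles per grid cell. */
    for (int j = 0; j < NY; j++) {
        for (int i = 0; i < NX; i++) {
            double x0 = i * CELL, y0 = j * CELL;
            double x1 = x0 + CELL, y1 = y0 + CELL;
            printf("%g %g %g\n%g %g %g\n%g %g %g\n",
                   x0, y0, z[j][i], x1, y0, z[j][i + 1], x1, y1, z[j + 1][i + 1]);
            printf("%g %g %g\n%g %g %g\n%g %g %g\n",
                   x0, y0, z[j][i], x1, y1, z[j + 1][i + 1], x0, y1, z[j + 1][i]);
        }
    }
    return 0;
}
```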
Final results

multicolour
I've generated and printed the result a couple of times, including an attempt at a multicolour print. At home, I have a large spool of brown-coloured recycled PLA, and many small lengths of samples in various colours (that I picked up at Maker Faire Czech Republic last year), including some short lengths of green. My home printer is a Prusa Mini, and I cheaped out and didn't buy the filament runout sensor, which would detect when the current filament ran out and let me handle the situation gracefully. Instead, I added several colour change instructions to the g-code at various heights, hoping that whatever plastic I loaded for each layer was enough to get the print to the next colour change instruction. The results are a little mixed, I think. I didn't catch the final layer running out in time (forgetting that the Bowden tube also means I need to catch it running out before the loading gear, a few inches earlier than the nozzle), so the final lush green colour ends prematurely. I've also got a fair bit of stringing to clean up. Finally, all these non-flat planes really show up some of the limitations of regular slicing. It would be interesting to try this with a non-planar slicer.
